00:00:00.001 Started by upstream project "autotest-per-patch" build number 132774 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.074 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:02.194 The recommended git tool is: git 00:00:02.194 using credential 00000000-0000-0000-0000-000000000002 00:00:02.196 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:02.209 Fetching changes from the remote Git repository 00:00:02.213 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:02.225 Using shallow fetch with depth 1 00:00:02.225 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:02.225 > git --version # timeout=10 00:00:02.237 > git --version # 'git version 2.39.2' 00:00:02.237 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:02.249 Setting http proxy: proxy-dmz.intel.com:911 00:00:02.249 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.616 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.628 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.641 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.641 > git config core.sparsecheckout # timeout=10 00:00:07.657 > git read-tree -mu HEAD # timeout=10 00:00:07.674 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.701 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.701 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.818 [Pipeline] Start of Pipeline 00:00:07.834 [Pipeline] library 00:00:07.836 Loading library shm_lib@master 00:00:07.836 Library shm_lib@master is cached. Copying from home. 00:00:07.851 [Pipeline] node 01:01:53.555 Still waiting to schedule task 01:01:53.556 Waiting for next available executor on ‘vagrant-vm-host’ 01:10:50.875 Running on VM-host-WFP7 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 01:10:50.877 [Pipeline] { 01:10:50.890 [Pipeline] catchError 01:10:50.892 [Pipeline] { 01:10:50.908 [Pipeline] wrap 01:10:50.918 [Pipeline] { 01:10:50.928 [Pipeline] stage 01:10:50.930 [Pipeline] { (Prologue) 01:10:50.953 [Pipeline] echo 01:10:50.955 Node: VM-host-WFP7 01:10:50.962 [Pipeline] cleanWs 01:10:50.973 [WS-CLEANUP] Deleting project workspace... 01:10:50.973 [WS-CLEANUP] Deferred wipeout is used... 
01:10:50.980 [WS-CLEANUP] done 01:10:51.291 [Pipeline] setCustomBuildProperty 01:10:51.383 [Pipeline] httpRequest 01:10:51.788 [Pipeline] echo 01:10:51.789 Sorcerer 10.211.164.101 is alive 01:10:51.799 [Pipeline] retry 01:10:51.800 [Pipeline] { 01:10:51.813 [Pipeline] httpRequest 01:10:51.819 HttpMethod: GET 01:10:51.819 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 01:10:51.820 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 01:10:51.821 Response Code: HTTP/1.1 200 OK 01:10:51.821 Success: Status code 200 is in the accepted range: 200,404 01:10:51.822 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 01:10:51.967 [Pipeline] } 01:10:51.986 [Pipeline] // retry 01:10:51.995 [Pipeline] sh 01:10:52.280 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 01:10:52.296 [Pipeline] httpRequest 01:10:52.696 [Pipeline] echo 01:10:52.698 Sorcerer 10.211.164.101 is alive 01:10:52.709 [Pipeline] retry 01:10:52.712 [Pipeline] { 01:10:52.727 [Pipeline] httpRequest 01:10:52.732 HttpMethod: GET 01:10:52.733 URL: http://10.211.164.101/packages/spdk_cabd61f7fcfe4266fd041fd1c59711acd76f4aff.tar.gz 01:10:52.734 Sending request to url: http://10.211.164.101/packages/spdk_cabd61f7fcfe4266fd041fd1c59711acd76f4aff.tar.gz 01:10:52.738 Response Code: HTTP/1.1 200 OK 01:10:52.739 Success: Status code 200 is in the accepted range: 200,404 01:10:52.744 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_cabd61f7fcfe4266fd041fd1c59711acd76f4aff.tar.gz 01:10:55.004 [Pipeline] } 01:10:55.018 [Pipeline] // retry 01:10:55.024 [Pipeline] sh 01:10:55.304 + tar --no-same-owner -xf spdk_cabd61f7fcfe4266fd041fd1c59711acd76f4aff.tar.gz 01:10:57.847 [Pipeline] sh 01:10:58.126 + git -C spdk log --oneline -n5 01:10:58.126 cabd61f7f env: extend the page table to support 4-KiB mapping 01:10:58.126 66902d69a env: explicitly set --legacy-mem flag in no hugepages mode 01:10:58.126 421ce3854 env: add mem_map_fini and vtophys_fini to cleanup mem maps 01:10:58.126 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 01:10:58.126 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 01:10:58.143 [Pipeline] writeFile 01:10:58.157 [Pipeline] sh 01:10:58.442 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 01:10:58.460 [Pipeline] sh 01:10:58.739 + cat autorun-spdk.conf 01:10:58.739 SPDK_RUN_FUNCTIONAL_TEST=1 01:10:58.739 SPDK_TEST_NVMF=1 01:10:58.739 SPDK_TEST_NVMF_TRANSPORT=tcp 01:10:58.739 SPDK_TEST_URING=1 01:10:58.739 SPDK_TEST_USDT=1 01:10:58.739 SPDK_RUN_UBSAN=1 01:10:58.739 NET_TYPE=virt 01:10:58.739 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 01:10:58.743 RUN_NIGHTLY=0 01:10:58.745 [Pipeline] } 01:10:58.755 [Pipeline] // stage 01:10:58.765 [Pipeline] stage 01:10:58.767 [Pipeline] { (Run VM) 01:10:58.775 [Pipeline] sh 01:10:59.052 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 01:10:59.052 + echo 'Start stage prepare_nvme.sh' 01:10:59.052 Start stage prepare_nvme.sh 01:10:59.052 + [[ -n 5 ]] 01:10:59.052 + disk_prefix=ex5 01:10:59.052 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 01:10:59.052 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 01:10:59.052 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 01:10:59.052 ++ SPDK_RUN_FUNCTIONAL_TEST=1 01:10:59.052 ++ 
SPDK_TEST_NVMF=1 01:10:59.052 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 01:10:59.052 ++ SPDK_TEST_URING=1 01:10:59.052 ++ SPDK_TEST_USDT=1 01:10:59.052 ++ SPDK_RUN_UBSAN=1 01:10:59.052 ++ NET_TYPE=virt 01:10:59.052 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 01:10:59.052 ++ RUN_NIGHTLY=0 01:10:59.052 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 01:10:59.052 + nvme_files=() 01:10:59.052 + declare -A nvme_files 01:10:59.052 + backend_dir=/var/lib/libvirt/images/backends 01:10:59.052 + nvme_files['nvme.img']=5G 01:10:59.052 + nvme_files['nvme-cmb.img']=5G 01:10:59.052 + nvme_files['nvme-multi0.img']=4G 01:10:59.052 + nvme_files['nvme-multi1.img']=4G 01:10:59.052 + nvme_files['nvme-multi2.img']=4G 01:10:59.052 + nvme_files['nvme-openstack.img']=8G 01:10:59.052 + nvme_files['nvme-zns.img']=5G 01:10:59.052 + (( SPDK_TEST_NVME_PMR == 1 )) 01:10:59.052 + (( SPDK_TEST_FTL == 1 )) 01:10:59.052 + (( SPDK_TEST_NVME_FDP == 1 )) 01:10:59.052 + [[ ! -d /var/lib/libvirt/images/backends ]] 01:10:59.052 + for nvme in "${!nvme_files[@]}" 01:10:59.052 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 01:10:59.052 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 01:10:59.052 + for nvme in "${!nvme_files[@]}" 01:10:59.052 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 01:10:59.052 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 01:10:59.052 + for nvme in "${!nvme_files[@]}" 01:10:59.052 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 01:10:59.052 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 01:10:59.052 + for nvme in "${!nvme_files[@]}" 01:10:59.052 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 01:10:59.052 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 01:10:59.052 + for nvme in "${!nvme_files[@]}" 01:10:59.052 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 01:10:59.052 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 01:10:59.052 + for nvme in "${!nvme_files[@]}" 01:10:59.052 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 01:10:59.052 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 01:10:59.052 + for nvme in "${!nvme_files[@]}" 01:10:59.052 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 01:10:59.052 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 01:10:59.052 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 01:10:59.052 + echo 'End stage prepare_nvme.sh' 01:10:59.052 End stage prepare_nvme.sh 01:10:59.063 [Pipeline] sh 01:10:59.346 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 01:10:59.346 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b 
/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 01:10:59.346 01:10:59.346 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 01:10:59.346 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 01:10:59.346 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 01:10:59.346 HELP=0 01:10:59.346 DRY_RUN=0 01:10:59.347 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 01:10:59.347 NVME_DISKS_TYPE=nvme,nvme, 01:10:59.347 NVME_AUTO_CREATE=0 01:10:59.347 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 01:10:59.347 NVME_CMB=,, 01:10:59.347 NVME_PMR=,, 01:10:59.347 NVME_ZNS=,, 01:10:59.347 NVME_MS=,, 01:10:59.347 NVME_FDP=,, 01:10:59.347 SPDK_VAGRANT_DISTRO=fedora39 01:10:59.347 SPDK_VAGRANT_VMCPU=10 01:10:59.347 SPDK_VAGRANT_VMRAM=12288 01:10:59.347 SPDK_VAGRANT_PROVIDER=libvirt 01:10:59.347 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 01:10:59.347 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 01:10:59.347 SPDK_OPENSTACK_NETWORK=0 01:10:59.347 VAGRANT_PACKAGE_BOX=0 01:10:59.347 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 01:10:59.347 FORCE_DISTRO=true 01:10:59.347 VAGRANT_BOX_VERSION= 01:10:59.347 EXTRA_VAGRANTFILES= 01:10:59.347 NIC_MODEL=virtio 01:10:59.347 01:10:59.347 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 01:10:59.347 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 01:11:01.267 Bringing machine 'default' up with 'libvirt' provider... 01:11:01.836 ==> default: Creating image (snapshot of base box volume). 01:11:01.836 ==> default: Creating domain with the following settings... 
01:11:01.836 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733720744_c404e4d05d7902b384ba 01:11:01.836 ==> default: -- Domain type: kvm 01:11:01.836 ==> default: -- Cpus: 10 01:11:01.836 ==> default: -- Feature: acpi 01:11:01.836 ==> default: -- Feature: apic 01:11:01.836 ==> default: -- Feature: pae 01:11:01.836 ==> default: -- Memory: 12288M 01:11:01.836 ==> default: -- Memory Backing: hugepages: 01:11:01.836 ==> default: -- Management MAC: 01:11:01.836 ==> default: -- Loader: 01:11:01.836 ==> default: -- Nvram: 01:11:01.836 ==> default: -- Base box: spdk/fedora39 01:11:01.836 ==> default: -- Storage pool: default 01:11:01.836 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733720744_c404e4d05d7902b384ba.img (20G) 01:11:01.836 ==> default: -- Volume Cache: default 01:11:01.836 ==> default: -- Kernel: 01:11:01.836 ==> default: -- Initrd: 01:11:01.836 ==> default: -- Graphics Type: vnc 01:11:01.836 ==> default: -- Graphics Port: -1 01:11:01.836 ==> default: -- Graphics IP: 127.0.0.1 01:11:01.836 ==> default: -- Graphics Password: Not defined 01:11:01.836 ==> default: -- Video Type: cirrus 01:11:01.836 ==> default: -- Video VRAM: 9216 01:11:01.836 ==> default: -- Sound Type: 01:11:01.836 ==> default: -- Keymap: en-us 01:11:01.836 ==> default: -- TPM Path: 01:11:01.836 ==> default: -- INPUT: type=mouse, bus=ps2 01:11:01.836 ==> default: -- Command line args: 01:11:01.836 ==> default: -> value=-device, 01:11:01.836 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 01:11:01.836 ==> default: -> value=-drive, 01:11:01.836 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 01:11:01.836 ==> default: -> value=-device, 01:11:01.836 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:11:01.836 ==> default: -> value=-device, 01:11:01.836 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 01:11:01.836 ==> default: -> value=-drive, 01:11:01.836 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 01:11:01.836 ==> default: -> value=-device, 01:11:01.836 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:11:01.836 ==> default: -> value=-drive, 01:11:01.836 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 01:11:01.836 ==> default: -> value=-device, 01:11:01.837 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:11:01.837 ==> default: -> value=-drive, 01:11:01.837 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 01:11:01.837 ==> default: -> value=-device, 01:11:01.837 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 01:11:02.095 ==> default: Creating shared folders metadata... 01:11:02.095 ==> default: Starting domain. 01:11:04.008 ==> default: Waiting for domain to get an IP address... 01:11:22.116 ==> default: Waiting for SSH to become available... 01:11:22.116 ==> default: Configuring and enabling network interfaces... 
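A minimal standalone equivalent of the NVMe topology recorded above (one controller exposing multiple namespaces backed by raw image files), sketched by hand for reference; the qemu-img step is an assumption about what create_nvme_img.sh does (that script's contents are not shown in this log), and the machine/CPU/memory values are illustrative rather than taken from this run:

  # Back each namespace with a raw image (assumed equivalent of create_nvme_img.sh;
  # the log above shows fmt=raw, preallocation=falloc)
  qemu-img create -f raw -o preallocation=falloc /var/lib/libvirt/images/backends/ex5-nvme-multi0.img 4G
  qemu-img create -f raw -o preallocation=falloc /var/lib/libvirt/images/backends/ex5-nvme-multi1.img 4G

  # Attach one NVMe controller (serial 12341) with two namespaces using 4 KiB blocks,
  # mirroring the -device/-drive arguments logged above (boot disk and NICs omitted)
  qemu-system-x86_64 -machine q35,accel=kvm -cpu host -smp 2 -m 2048 \
    -device nvme,id=nvme-1,serial=12341 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096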
01:11:27.390 default: SSH address: 192.168.121.178:22 01:11:27.390 default: SSH username: vagrant 01:11:27.390 default: SSH auth method: private key 01:11:29.929 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 01:11:38.058 ==> default: Mounting SSHFS shared folder... 01:11:40.621 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 01:11:40.621 ==> default: Checking Mount.. 01:11:42.017 ==> default: Folder Successfully Mounted! 01:11:42.017 ==> default: Running provisioner: file... 01:11:42.963 default: ~/.gitconfig => .gitconfig 01:11:43.538 01:11:43.538 SUCCESS! 01:11:43.538 01:11:43.538 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 01:11:43.538 Use vagrant "suspend" and vagrant "resume" to stop and start. 01:11:43.538 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 01:11:43.538 01:11:43.546 [Pipeline] } 01:11:43.558 [Pipeline] // stage 01:11:43.567 [Pipeline] dir 01:11:43.567 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 01:11:43.569 [Pipeline] { 01:11:43.601 [Pipeline] catchError 01:11:43.602 [Pipeline] { 01:11:43.611 [Pipeline] sh 01:11:43.893 + vagrant ssh-config --host vagrant 01:11:43.893 + sed -ne /^Host/,$p 01:11:43.893 + tee ssh_conf 01:11:46.463 Host vagrant 01:11:46.463 HostName 192.168.121.178 01:11:46.463 User vagrant 01:11:46.463 Port 22 01:11:46.463 UserKnownHostsFile /dev/null 01:11:46.463 StrictHostKeyChecking no 01:11:46.463 PasswordAuthentication no 01:11:46.463 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 01:11:46.463 IdentitiesOnly yes 01:11:46.463 LogLevel FATAL 01:11:46.463 ForwardAgent yes 01:11:46.463 ForwardX11 yes 01:11:46.463 01:11:46.477 [Pipeline] withEnv 01:11:46.479 [Pipeline] { 01:11:46.490 [Pipeline] sh 01:11:46.768 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 01:11:46.768 source /etc/os-release 01:11:46.768 [[ -e /image.version ]] && img=$(< /image.version) 01:11:46.768 # Minimal, systemd-like check. 01:11:46.768 if [[ -e /.dockerenv ]]; then 01:11:46.768 # Clear garbage from the node's name: 01:11:46.768 # agt-er_autotest_547-896 -> autotest_547-896 01:11:46.768 # $HOSTNAME is the actual container id 01:11:46.768 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 01:11:46.768 if grep -q "/etc/hostname" /proc/self/mountinfo; then 01:11:46.768 # We can assume this is a mount from a host where container is running, 01:11:46.768 # so fetch its hostname to easily identify the target swarm worker. 
01:11:46.768 container="$(< /etc/hostname) ($agent)" 01:11:46.768 else 01:11:46.768 # Fallback 01:11:46.768 container=$agent 01:11:46.768 fi 01:11:46.768 fi 01:11:46.768 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 01:11:46.768 01:11:47.037 [Pipeline] } 01:11:47.052 [Pipeline] // withEnv 01:11:47.061 [Pipeline] setCustomBuildProperty 01:11:47.075 [Pipeline] stage 01:11:47.077 [Pipeline] { (Tests) 01:11:47.095 [Pipeline] sh 01:11:47.376 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 01:11:47.647 [Pipeline] sh 01:11:47.925 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 01:11:48.201 [Pipeline] timeout 01:11:48.201 Timeout set to expire in 1 hr 0 min 01:11:48.203 [Pipeline] { 01:11:48.216 [Pipeline] sh 01:11:48.498 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 01:11:49.067 HEAD is now at cabd61f7f env: extend the page table to support 4-KiB mapping 01:11:49.080 [Pipeline] sh 01:11:49.366 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 01:11:49.640 [Pipeline] sh 01:11:49.921 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 01:11:50.241 [Pipeline] sh 01:11:50.525 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 01:11:50.785 ++ readlink -f spdk_repo 01:11:50.785 + DIR_ROOT=/home/vagrant/spdk_repo 01:11:50.785 + [[ -n /home/vagrant/spdk_repo ]] 01:11:50.785 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 01:11:50.785 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 01:11:50.785 + [[ -d /home/vagrant/spdk_repo/spdk ]] 01:11:50.785 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 01:11:50.785 + [[ -d /home/vagrant/spdk_repo/output ]] 01:11:50.785 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 01:11:50.785 + cd /home/vagrant/spdk_repo 01:11:50.785 + source /etc/os-release 01:11:50.785 ++ NAME='Fedora Linux' 01:11:50.785 ++ VERSION='39 (Cloud Edition)' 01:11:50.785 ++ ID=fedora 01:11:50.785 ++ VERSION_ID=39 01:11:50.785 ++ VERSION_CODENAME= 01:11:50.785 ++ PLATFORM_ID=platform:f39 01:11:50.785 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 01:11:50.785 ++ ANSI_COLOR='0;38;2;60;110;180' 01:11:50.785 ++ LOGO=fedora-logo-icon 01:11:50.785 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 01:11:50.785 ++ HOME_URL=https://fedoraproject.org/ 01:11:50.785 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 01:11:50.785 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 01:11:50.785 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 01:11:50.785 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 01:11:50.785 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 01:11:50.785 ++ REDHAT_SUPPORT_PRODUCT=Fedora 01:11:50.785 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 01:11:50.785 ++ SUPPORT_END=2024-11-12 01:11:50.785 ++ VARIANT='Cloud Edition' 01:11:50.785 ++ VARIANT_ID=cloud 01:11:50.785 + uname -a 01:11:50.785 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 01:11:50.785 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 01:11:51.355 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:11:51.355 Hugepages 01:11:51.355 node hugesize free / total 01:11:51.355 node0 1048576kB 0 / 0 01:11:51.355 node0 2048kB 0 / 0 01:11:51.355 01:11:51.355 Type BDF Vendor Device NUMA Driver Device Block devices 01:11:51.355 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 01:11:51.355 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 01:11:51.355 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 01:11:51.355 + rm -f /tmp/spdk-ld-path 01:11:51.355 + source autorun-spdk.conf 01:11:51.355 ++ SPDK_RUN_FUNCTIONAL_TEST=1 01:11:51.355 ++ SPDK_TEST_NVMF=1 01:11:51.355 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 01:11:51.355 ++ SPDK_TEST_URING=1 01:11:51.355 ++ SPDK_TEST_USDT=1 01:11:51.355 ++ SPDK_RUN_UBSAN=1 01:11:51.355 ++ NET_TYPE=virt 01:11:51.355 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 01:11:51.355 ++ RUN_NIGHTLY=0 01:11:51.355 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 01:11:51.355 + [[ -n '' ]] 01:11:51.355 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 01:11:51.355 + for M in /var/spdk/build-*-manifest.txt 01:11:51.355 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 01:11:51.355 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 01:11:51.355 + for M in /var/spdk/build-*-manifest.txt 01:11:51.355 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 01:11:51.355 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 01:11:51.614 + for M in /var/spdk/build-*-manifest.txt 01:11:51.614 + [[ -f /var/spdk/build-repo-manifest.txt ]] 01:11:51.614 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 01:11:51.614 ++ uname 01:11:51.614 + [[ Linux == \L\i\n\u\x ]] 01:11:51.614 + sudo dmesg -T 01:11:51.614 + sudo dmesg --clear 01:11:51.614 + dmesg_pid=5423 01:11:51.614 + [[ Fedora Linux == FreeBSD ]] 01:11:51.614 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 01:11:51.614 + UNBIND_ENTIRE_IOMMU_GROUP=yes 01:11:51.614 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 01:11:51.615 + [[ -x /usr/src/fio-static/fio ]] 01:11:51.615 + sudo dmesg -Tw 01:11:51.615 + export FIO_BIN=/usr/src/fio-static/fio 01:11:51.615 + FIO_BIN=/usr/src/fio-static/fio 01:11:51.615 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 01:11:51.615 + [[ ! -v VFIO_QEMU_BIN ]] 01:11:51.615 + [[ -e /usr/local/qemu/vfio-user-latest ]] 01:11:51.615 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 01:11:51.615 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 01:11:51.615 + [[ -e /usr/local/qemu/vanilla-latest ]] 01:11:51.615 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 01:11:51.615 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 01:11:51.615 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 01:11:51.615 05:06:34 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 01:11:51.615 05:06:34 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 01:11:51.615 05:06:34 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 01:11:51.615 05:06:34 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 01:11:51.615 05:06:34 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 01:11:51.615 05:06:34 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 01:11:51.615 05:06:34 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 01:11:51.615 05:06:34 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 01:11:51.615 05:06:34 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 01:11:51.615 05:06:34 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 01:11:51.615 05:06:34 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 01:11:51.615 05:06:34 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 01:11:51.615 05:06:34 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 01:11:51.874 05:06:34 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 01:11:51.874 05:06:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:11:51.874 05:06:34 -- scripts/common.sh@15 -- $ shopt -s extglob 01:11:51.874 05:06:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 01:11:51.874 05:06:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:11:51.874 05:06:34 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 01:11:51.874 05:06:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:51.874 05:06:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:51.874 05:06:34 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:51.874 05:06:34 -- paths/export.sh@5 -- $ export PATH 01:11:51.874 05:06:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:51.874 05:06:34 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 01:11:51.874 05:06:34 -- common/autobuild_common.sh@493 -- $ date +%s 01:11:51.874 05:06:34 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733720794.XXXXXX 01:11:51.874 05:06:34 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733720794.EdDxzc 01:11:51.874 05:06:34 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 01:11:51.874 05:06:34 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 01:11:51.874 05:06:34 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 01:11:51.874 05:06:34 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 01:11:51.874 05:06:34 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 01:11:51.874 05:06:34 -- common/autobuild_common.sh@509 -- $ get_config_params 01:11:51.874 05:06:34 -- common/autotest_common.sh@409 -- $ xtrace_disable 01:11:51.874 05:06:34 -- common/autotest_common.sh@10 -- $ set +x 01:11:51.874 05:06:34 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 01:11:51.874 05:06:34 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 01:11:51.874 05:06:34 -- pm/common@17 -- $ local monitor 01:11:51.874 05:06:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:11:51.874 05:06:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:11:51.874 05:06:34 -- pm/common@25 -- $ sleep 1 01:11:51.874 05:06:34 -- pm/common@21 -- $ date +%s 01:11:51.874 05:06:34 -- pm/common@21 -- $ date +%s 01:11:51.874 05:06:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733720794 01:11:51.874 05:06:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733720794 01:11:51.874 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733720794_collect-cpu-load.pm.log 01:11:51.874 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733720794_collect-vmstat.pm.log 01:11:52.820 05:06:35 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 01:11:52.820 05:06:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 01:11:52.820 05:06:35 -- spdk/autobuild.sh@12 -- $ umask 022 01:11:52.820 05:06:35 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 01:11:52.820 05:06:35 -- spdk/autobuild.sh@16 -- $ date -u 01:11:52.820 Mon Dec 9 05:06:35 AM UTC 2024 01:11:52.820 05:06:35 -- spdk/autobuild.sh@17 -- $ git describe --tags 01:11:52.820 v25.01-pre-279-gcabd61f7f 01:11:52.820 05:06:35 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 01:11:52.820 05:06:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 01:11:52.820 05:06:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 01:11:52.820 05:06:35 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 01:11:52.820 05:06:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable 01:11:52.820 05:06:35 -- common/autotest_common.sh@10 -- $ set +x 01:11:52.820 ************************************ 01:11:52.820 START TEST ubsan 01:11:52.820 ************************************ 01:11:52.821 using ubsan 01:11:52.821 05:06:35 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 01:11:52.821 01:11:52.821 real 0m0.001s 01:11:52.821 user 0m0.000s 01:11:52.821 sys 0m0.001s 01:11:52.821 05:06:35 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 01:11:52.821 05:06:35 ubsan -- common/autotest_common.sh@10 -- $ set +x 01:11:52.821 ************************************ 01:11:52.821 END TEST ubsan 01:11:52.821 ************************************ 01:11:52.821 05:06:35 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 01:11:52.821 05:06:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 01:11:52.821 05:06:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 01:11:52.821 05:06:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 01:11:52.821 05:06:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 01:11:52.821 05:06:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 01:11:52.821 05:06:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 01:11:52.821 05:06:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 01:11:52.821 05:06:35 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 01:11:53.079 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 01:11:53.079 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 01:11:53.648 Using 'verbs' RDMA provider 01:12:09.466 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 01:12:24.385 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 01:12:25.328 Creating mk/config.mk...done. 01:12:25.328 Creating mk/cc.flags.mk...done. 01:12:25.328 Type 'make' to build. 
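For reference, the configure invocation logged above can be reproduced outside this CI VM roughly as follows; the clone, submodule, and pkgdep steps are assumed standard SPDK setup (they are not part of this log), and paths such as /usr/src/fio must exist locally for --with-fio to resolve:

  # Sketch: rebuild SPDK with the same flags recorded in this run
  git clone https://github.com/spdk/spdk.git && cd spdk
  git submodule update --init
  sudo ./scripts/pkgdep.sh        # install build dependencies (assumed; not shown in this log)
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt \
      --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
      --disable-unit-tests --enable-ubsan --enable-coverage \
      --with-ublk --with-uring --with-shared
  make -j"$(nproc)"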
01:12:25.328 05:07:07 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 01:12:25.328 05:07:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 01:12:25.328 05:07:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 01:12:25.328 05:07:07 -- common/autotest_common.sh@10 -- $ set +x 01:12:25.328 ************************************ 01:12:25.328 START TEST make 01:12:25.328 ************************************ 01:12:25.328 05:07:07 make -- common/autotest_common.sh@1129 -- $ make -j10 01:12:25.587 make[1]: Nothing to be done for 'all'. 01:12:35.571 The Meson build system 01:12:35.571 Version: 1.5.0 01:12:35.571 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 01:12:35.571 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 01:12:35.571 Build type: native build 01:12:35.571 Program cat found: YES (/usr/bin/cat) 01:12:35.571 Project name: DPDK 01:12:35.571 Project version: 24.03.0 01:12:35.571 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 01:12:35.571 C linker for the host machine: cc ld.bfd 2.40-14 01:12:35.571 Host machine cpu family: x86_64 01:12:35.571 Host machine cpu: x86_64 01:12:35.571 Message: ## Building in Developer Mode ## 01:12:35.571 Program pkg-config found: YES (/usr/bin/pkg-config) 01:12:35.571 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 01:12:35.571 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 01:12:35.571 Program python3 found: YES (/usr/bin/python3) 01:12:35.571 Program cat found: YES (/usr/bin/cat) 01:12:35.571 Compiler for C supports arguments -march=native: YES 01:12:35.571 Checking for size of "void *" : 8 01:12:35.571 Checking for size of "void *" : 8 (cached) 01:12:35.571 Compiler for C supports link arguments -Wl,--undefined-version: YES 01:12:35.571 Library m found: YES 01:12:35.571 Library numa found: YES 01:12:35.571 Has header "numaif.h" : YES 01:12:35.571 Library fdt found: NO 01:12:35.571 Library execinfo found: NO 01:12:35.571 Has header "execinfo.h" : YES 01:12:35.572 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 01:12:35.572 Run-time dependency libarchive found: NO (tried pkgconfig) 01:12:35.572 Run-time dependency libbsd found: NO (tried pkgconfig) 01:12:35.572 Run-time dependency jansson found: NO (tried pkgconfig) 01:12:35.572 Run-time dependency openssl found: YES 3.1.1 01:12:35.572 Run-time dependency libpcap found: YES 1.10.4 01:12:35.572 Has header "pcap.h" with dependency libpcap: YES 01:12:35.572 Compiler for C supports arguments -Wcast-qual: YES 01:12:35.572 Compiler for C supports arguments -Wdeprecated: YES 01:12:35.572 Compiler for C supports arguments -Wformat: YES 01:12:35.572 Compiler for C supports arguments -Wformat-nonliteral: NO 01:12:35.572 Compiler for C supports arguments -Wformat-security: NO 01:12:35.572 Compiler for C supports arguments -Wmissing-declarations: YES 01:12:35.572 Compiler for C supports arguments -Wmissing-prototypes: YES 01:12:35.572 Compiler for C supports arguments -Wnested-externs: YES 01:12:35.572 Compiler for C supports arguments -Wold-style-definition: YES 01:12:35.572 Compiler for C supports arguments -Wpointer-arith: YES 01:12:35.572 Compiler for C supports arguments -Wsign-compare: YES 01:12:35.572 Compiler for C supports arguments -Wstrict-prototypes: YES 01:12:35.572 Compiler for C supports arguments -Wundef: YES 01:12:35.572 Compiler for C supports arguments -Wwrite-strings: YES 01:12:35.572 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 01:12:35.572 Compiler for C supports arguments -Wno-packed-not-aligned: YES 01:12:35.572 Compiler for C supports arguments -Wno-missing-field-initializers: YES 01:12:35.572 Compiler for C supports arguments -Wno-zero-length-bounds: YES 01:12:35.572 Program objdump found: YES (/usr/bin/objdump) 01:12:35.572 Compiler for C supports arguments -mavx512f: YES 01:12:35.572 Checking if "AVX512 checking" compiles: YES 01:12:35.572 Fetching value of define "__SSE4_2__" : 1 01:12:35.572 Fetching value of define "__AES__" : 1 01:12:35.572 Fetching value of define "__AVX__" : 1 01:12:35.572 Fetching value of define "__AVX2__" : 1 01:12:35.572 Fetching value of define "__AVX512BW__" : 1 01:12:35.572 Fetching value of define "__AVX512CD__" : 1 01:12:35.572 Fetching value of define "__AVX512DQ__" : 1 01:12:35.572 Fetching value of define "__AVX512F__" : 1 01:12:35.572 Fetching value of define "__AVX512VL__" : 1 01:12:35.572 Fetching value of define "__PCLMUL__" : 1 01:12:35.572 Fetching value of define "__RDRND__" : 1 01:12:35.572 Fetching value of define "__RDSEED__" : 1 01:12:35.572 Fetching value of define "__VPCLMULQDQ__" : (undefined) 01:12:35.572 Fetching value of define "__znver1__" : (undefined) 01:12:35.572 Fetching value of define "__znver2__" : (undefined) 01:12:35.572 Fetching value of define "__znver3__" : (undefined) 01:12:35.572 Fetching value of define "__znver4__" : (undefined) 01:12:35.572 Compiler for C supports arguments -Wno-format-truncation: YES 01:12:35.572 Message: lib/log: Defining dependency "log" 01:12:35.572 Message: lib/kvargs: Defining dependency "kvargs" 01:12:35.572 Message: lib/telemetry: Defining dependency "telemetry" 01:12:35.572 Checking for function "getentropy" : NO 01:12:35.572 Message: lib/eal: Defining dependency "eal" 01:12:35.572 Message: lib/ring: Defining dependency "ring" 01:12:35.572 Message: lib/rcu: Defining dependency "rcu" 01:12:35.572 Message: lib/mempool: Defining dependency "mempool" 01:12:35.572 Message: lib/mbuf: Defining dependency "mbuf" 01:12:35.572 Fetching value of define "__PCLMUL__" : 1 (cached) 01:12:35.572 Fetching value of define "__AVX512F__" : 1 (cached) 01:12:35.572 Fetching value of define "__AVX512BW__" : 1 (cached) 01:12:35.572 Fetching value of define "__AVX512DQ__" : 1 (cached) 01:12:35.572 Fetching value of define "__AVX512VL__" : 1 (cached) 01:12:35.572 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 01:12:35.572 Compiler for C supports arguments -mpclmul: YES 01:12:35.572 Compiler for C supports arguments -maes: YES 01:12:35.572 Compiler for C supports arguments -mavx512f: YES (cached) 01:12:35.572 Compiler for C supports arguments -mavx512bw: YES 01:12:35.572 Compiler for C supports arguments -mavx512dq: YES 01:12:35.572 Compiler for C supports arguments -mavx512vl: YES 01:12:35.572 Compiler for C supports arguments -mvpclmulqdq: YES 01:12:35.572 Compiler for C supports arguments -mavx2: YES 01:12:35.572 Compiler for C supports arguments -mavx: YES 01:12:35.572 Message: lib/net: Defining dependency "net" 01:12:35.572 Message: lib/meter: Defining dependency "meter" 01:12:35.572 Message: lib/ethdev: Defining dependency "ethdev" 01:12:35.572 Message: lib/pci: Defining dependency "pci" 01:12:35.572 Message: lib/cmdline: Defining dependency "cmdline" 01:12:35.572 Message: lib/hash: Defining dependency "hash" 01:12:35.572 Message: lib/timer: Defining dependency "timer" 01:12:35.572 Message: lib/compressdev: Defining dependency "compressdev" 01:12:35.572 Message: 
lib/cryptodev: Defining dependency "cryptodev" 01:12:35.572 Message: lib/dmadev: Defining dependency "dmadev" 01:12:35.572 Compiler for C supports arguments -Wno-cast-qual: YES 01:12:35.572 Message: lib/power: Defining dependency "power" 01:12:35.572 Message: lib/reorder: Defining dependency "reorder" 01:12:35.572 Message: lib/security: Defining dependency "security" 01:12:35.572 Has header "linux/userfaultfd.h" : YES 01:12:35.572 Has header "linux/vduse.h" : YES 01:12:35.572 Message: lib/vhost: Defining dependency "vhost" 01:12:35.572 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 01:12:35.572 Message: drivers/bus/pci: Defining dependency "bus_pci" 01:12:35.572 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 01:12:35.572 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 01:12:35.572 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 01:12:35.572 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 01:12:35.572 Message: Disabling ml/* drivers: missing internal dependency "mldev" 01:12:35.572 Message: Disabling event/* drivers: missing internal dependency "eventdev" 01:12:35.572 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 01:12:35.572 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 01:12:35.572 Program doxygen found: YES (/usr/local/bin/doxygen) 01:12:35.572 Configuring doxy-api-html.conf using configuration 01:12:35.572 Configuring doxy-api-man.conf using configuration 01:12:35.572 Program mandb found: YES (/usr/bin/mandb) 01:12:35.572 Program sphinx-build found: NO 01:12:35.572 Configuring rte_build_config.h using configuration 01:12:35.572 Message: 01:12:35.572 ================= 01:12:35.572 Applications Enabled 01:12:35.572 ================= 01:12:35.572 01:12:35.572 apps: 01:12:35.572 01:12:35.572 01:12:35.572 Message: 01:12:35.572 ================= 01:12:35.572 Libraries Enabled 01:12:35.572 ================= 01:12:35.572 01:12:35.572 libs: 01:12:35.572 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 01:12:35.572 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 01:12:35.572 cryptodev, dmadev, power, reorder, security, vhost, 01:12:35.572 01:12:35.572 Message: 01:12:35.572 =============== 01:12:35.572 Drivers Enabled 01:12:35.572 =============== 01:12:35.572 01:12:35.572 common: 01:12:35.572 01:12:35.572 bus: 01:12:35.572 pci, vdev, 01:12:35.572 mempool: 01:12:35.572 ring, 01:12:35.572 dma: 01:12:35.572 01:12:35.572 net: 01:12:35.572 01:12:35.572 crypto: 01:12:35.572 01:12:35.572 compress: 01:12:35.572 01:12:35.572 vdpa: 01:12:35.572 01:12:35.572 01:12:35.572 Message: 01:12:35.572 ================= 01:12:35.572 Content Skipped 01:12:35.572 ================= 01:12:35.572 01:12:35.572 apps: 01:12:35.572 dumpcap: explicitly disabled via build config 01:12:35.572 graph: explicitly disabled via build config 01:12:35.572 pdump: explicitly disabled via build config 01:12:35.572 proc-info: explicitly disabled via build config 01:12:35.572 test-acl: explicitly disabled via build config 01:12:35.572 test-bbdev: explicitly disabled via build config 01:12:35.572 test-cmdline: explicitly disabled via build config 01:12:35.572 test-compress-perf: explicitly disabled via build config 01:12:35.572 test-crypto-perf: explicitly disabled via build config 01:12:35.572 test-dma-perf: explicitly disabled via build config 01:12:35.572 test-eventdev: explicitly disabled via build config 01:12:35.572 test-fib: explicitly disabled via build config 
01:12:35.572 test-flow-perf: explicitly disabled via build config 01:12:35.573 test-gpudev: explicitly disabled via build config 01:12:35.573 test-mldev: explicitly disabled via build config 01:12:35.573 test-pipeline: explicitly disabled via build config 01:12:35.573 test-pmd: explicitly disabled via build config 01:12:35.573 test-regex: explicitly disabled via build config 01:12:35.573 test-sad: explicitly disabled via build config 01:12:35.573 test-security-perf: explicitly disabled via build config 01:12:35.573 01:12:35.573 libs: 01:12:35.573 argparse: explicitly disabled via build config 01:12:35.573 metrics: explicitly disabled via build config 01:12:35.573 acl: explicitly disabled via build config 01:12:35.573 bbdev: explicitly disabled via build config 01:12:35.573 bitratestats: explicitly disabled via build config 01:12:35.573 bpf: explicitly disabled via build config 01:12:35.573 cfgfile: explicitly disabled via build config 01:12:35.573 distributor: explicitly disabled via build config 01:12:35.573 efd: explicitly disabled via build config 01:12:35.573 eventdev: explicitly disabled via build config 01:12:35.573 dispatcher: explicitly disabled via build config 01:12:35.573 gpudev: explicitly disabled via build config 01:12:35.573 gro: explicitly disabled via build config 01:12:35.573 gso: explicitly disabled via build config 01:12:35.573 ip_frag: explicitly disabled via build config 01:12:35.573 jobstats: explicitly disabled via build config 01:12:35.573 latencystats: explicitly disabled via build config 01:12:35.573 lpm: explicitly disabled via build config 01:12:35.573 member: explicitly disabled via build config 01:12:35.573 pcapng: explicitly disabled via build config 01:12:35.573 rawdev: explicitly disabled via build config 01:12:35.573 regexdev: explicitly disabled via build config 01:12:35.573 mldev: explicitly disabled via build config 01:12:35.573 rib: explicitly disabled via build config 01:12:35.573 sched: explicitly disabled via build config 01:12:35.573 stack: explicitly disabled via build config 01:12:35.573 ipsec: explicitly disabled via build config 01:12:35.573 pdcp: explicitly disabled via build config 01:12:35.573 fib: explicitly disabled via build config 01:12:35.573 port: explicitly disabled via build config 01:12:35.573 pdump: explicitly disabled via build config 01:12:35.573 table: explicitly disabled via build config 01:12:35.573 pipeline: explicitly disabled via build config 01:12:35.573 graph: explicitly disabled via build config 01:12:35.573 node: explicitly disabled via build config 01:12:35.573 01:12:35.573 drivers: 01:12:35.573 common/cpt: not in enabled drivers build config 01:12:35.573 common/dpaax: not in enabled drivers build config 01:12:35.573 common/iavf: not in enabled drivers build config 01:12:35.573 common/idpf: not in enabled drivers build config 01:12:35.573 common/ionic: not in enabled drivers build config 01:12:35.573 common/mvep: not in enabled drivers build config 01:12:35.573 common/octeontx: not in enabled drivers build config 01:12:35.573 bus/auxiliary: not in enabled drivers build config 01:12:35.573 bus/cdx: not in enabled drivers build config 01:12:35.573 bus/dpaa: not in enabled drivers build config 01:12:35.573 bus/fslmc: not in enabled drivers build config 01:12:35.573 bus/ifpga: not in enabled drivers build config 01:12:35.573 bus/platform: not in enabled drivers build config 01:12:35.573 bus/uacce: not in enabled drivers build config 01:12:35.573 bus/vmbus: not in enabled drivers build config 01:12:35.573 common/cnxk: not 
in enabled drivers build config 01:12:35.573 common/mlx5: not in enabled drivers build config 01:12:35.573 common/nfp: not in enabled drivers build config 01:12:35.573 common/nitrox: not in enabled drivers build config 01:12:35.573 common/qat: not in enabled drivers build config 01:12:35.573 common/sfc_efx: not in enabled drivers build config 01:12:35.573 mempool/bucket: not in enabled drivers build config 01:12:35.573 mempool/cnxk: not in enabled drivers build config 01:12:35.573 mempool/dpaa: not in enabled drivers build config 01:12:35.573 mempool/dpaa2: not in enabled drivers build config 01:12:35.573 mempool/octeontx: not in enabled drivers build config 01:12:35.573 mempool/stack: not in enabled drivers build config 01:12:35.573 dma/cnxk: not in enabled drivers build config 01:12:35.573 dma/dpaa: not in enabled drivers build config 01:12:35.573 dma/dpaa2: not in enabled drivers build config 01:12:35.573 dma/hisilicon: not in enabled drivers build config 01:12:35.573 dma/idxd: not in enabled drivers build config 01:12:35.573 dma/ioat: not in enabled drivers build config 01:12:35.573 dma/skeleton: not in enabled drivers build config 01:12:35.573 net/af_packet: not in enabled drivers build config 01:12:35.573 net/af_xdp: not in enabled drivers build config 01:12:35.573 net/ark: not in enabled drivers build config 01:12:35.573 net/atlantic: not in enabled drivers build config 01:12:35.573 net/avp: not in enabled drivers build config 01:12:35.573 net/axgbe: not in enabled drivers build config 01:12:35.573 net/bnx2x: not in enabled drivers build config 01:12:35.573 net/bnxt: not in enabled drivers build config 01:12:35.573 net/bonding: not in enabled drivers build config 01:12:35.573 net/cnxk: not in enabled drivers build config 01:12:35.573 net/cpfl: not in enabled drivers build config 01:12:35.573 net/cxgbe: not in enabled drivers build config 01:12:35.573 net/dpaa: not in enabled drivers build config 01:12:35.573 net/dpaa2: not in enabled drivers build config 01:12:35.573 net/e1000: not in enabled drivers build config 01:12:35.573 net/ena: not in enabled drivers build config 01:12:35.573 net/enetc: not in enabled drivers build config 01:12:35.573 net/enetfec: not in enabled drivers build config 01:12:35.573 net/enic: not in enabled drivers build config 01:12:35.573 net/failsafe: not in enabled drivers build config 01:12:35.573 net/fm10k: not in enabled drivers build config 01:12:35.573 net/gve: not in enabled drivers build config 01:12:35.573 net/hinic: not in enabled drivers build config 01:12:35.573 net/hns3: not in enabled drivers build config 01:12:35.573 net/i40e: not in enabled drivers build config 01:12:35.573 net/iavf: not in enabled drivers build config 01:12:35.573 net/ice: not in enabled drivers build config 01:12:35.573 net/idpf: not in enabled drivers build config 01:12:35.573 net/igc: not in enabled drivers build config 01:12:35.573 net/ionic: not in enabled drivers build config 01:12:35.573 net/ipn3ke: not in enabled drivers build config 01:12:35.573 net/ixgbe: not in enabled drivers build config 01:12:35.573 net/mana: not in enabled drivers build config 01:12:35.573 net/memif: not in enabled drivers build config 01:12:35.573 net/mlx4: not in enabled drivers build config 01:12:35.573 net/mlx5: not in enabled drivers build config 01:12:35.573 net/mvneta: not in enabled drivers build config 01:12:35.573 net/mvpp2: not in enabled drivers build config 01:12:35.573 net/netvsc: not in enabled drivers build config 01:12:35.573 net/nfb: not in enabled drivers build config 
01:12:35.573 net/nfp: not in enabled drivers build config 01:12:35.573 net/ngbe: not in enabled drivers build config 01:12:35.573 net/null: not in enabled drivers build config 01:12:35.573 net/octeontx: not in enabled drivers build config 01:12:35.573 net/octeon_ep: not in enabled drivers build config 01:12:35.573 net/pcap: not in enabled drivers build config 01:12:35.573 net/pfe: not in enabled drivers build config 01:12:35.573 net/qede: not in enabled drivers build config 01:12:35.573 net/ring: not in enabled drivers build config 01:12:35.573 net/sfc: not in enabled drivers build config 01:12:35.573 net/softnic: not in enabled drivers build config 01:12:35.573 net/tap: not in enabled drivers build config 01:12:35.573 net/thunderx: not in enabled drivers build config 01:12:35.573 net/txgbe: not in enabled drivers build config 01:12:35.573 net/vdev_netvsc: not in enabled drivers build config 01:12:35.573 net/vhost: not in enabled drivers build config 01:12:35.573 net/virtio: not in enabled drivers build config 01:12:35.573 net/vmxnet3: not in enabled drivers build config 01:12:35.573 raw/*: missing internal dependency, "rawdev" 01:12:35.573 crypto/armv8: not in enabled drivers build config 01:12:35.573 crypto/bcmfs: not in enabled drivers build config 01:12:35.573 crypto/caam_jr: not in enabled drivers build config 01:12:35.573 crypto/ccp: not in enabled drivers build config 01:12:35.573 crypto/cnxk: not in enabled drivers build config 01:12:35.573 crypto/dpaa_sec: not in enabled drivers build config 01:12:35.573 crypto/dpaa2_sec: not in enabled drivers build config 01:12:35.573 crypto/ipsec_mb: not in enabled drivers build config 01:12:35.573 crypto/mlx5: not in enabled drivers build config 01:12:35.573 crypto/mvsam: not in enabled drivers build config 01:12:35.573 crypto/nitrox: not in enabled drivers build config 01:12:35.573 crypto/null: not in enabled drivers build config 01:12:35.573 crypto/octeontx: not in enabled drivers build config 01:12:35.573 crypto/openssl: not in enabled drivers build config 01:12:35.573 crypto/scheduler: not in enabled drivers build config 01:12:35.573 crypto/uadk: not in enabled drivers build config 01:12:35.573 crypto/virtio: not in enabled drivers build config 01:12:35.573 compress/isal: not in enabled drivers build config 01:12:35.573 compress/mlx5: not in enabled drivers build config 01:12:35.573 compress/nitrox: not in enabled drivers build config 01:12:35.573 compress/octeontx: not in enabled drivers build config 01:12:35.573 compress/zlib: not in enabled drivers build config 01:12:35.573 regex/*: missing internal dependency, "regexdev" 01:12:35.573 ml/*: missing internal dependency, "mldev" 01:12:35.573 vdpa/ifc: not in enabled drivers build config 01:12:35.573 vdpa/mlx5: not in enabled drivers build config 01:12:35.573 vdpa/nfp: not in enabled drivers build config 01:12:35.573 vdpa/sfc: not in enabled drivers build config 01:12:35.573 event/*: missing internal dependency, "eventdev" 01:12:35.573 baseband/*: missing internal dependency, "bbdev" 01:12:35.573 gpu/*: missing internal dependency, "gpudev" 01:12:35.573 01:12:35.573 01:12:35.834 Build targets in project: 85 01:12:35.834 01:12:35.834 DPDK 24.03.0 01:12:35.834 01:12:35.834 User defined options 01:12:35.834 buildtype : debug 01:12:35.834 default_library : shared 01:12:35.834 libdir : lib 01:12:35.834 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 01:12:35.834 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 01:12:35.834 c_link_args : 
01:12:35.834 cpu_instruction_set: native 01:12:35.834 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 01:12:35.834 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 01:12:35.834 enable_docs : false 01:12:35.834 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 01:12:35.834 enable_kmods : false 01:12:35.834 max_lcores : 128 01:12:35.834 tests : false 01:12:35.834 01:12:35.834 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 01:12:36.401 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 01:12:36.401 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 01:12:36.401 [2/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 01:12:36.401 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 01:12:36.401 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 01:12:36.401 [5/268] Linking static target lib/librte_kvargs.a 01:12:36.401 [6/268] Linking static target lib/librte_log.a 01:12:36.659 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 01:12:36.659 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 01:12:36.659 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 01:12:36.916 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 01:12:36.916 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 01:12:36.916 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 01:12:36.916 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 01:12:36.916 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 01:12:36.916 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 01:12:36.916 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 01:12:36.916 [17/268] Linking static target lib/librte_telemetry.a 01:12:36.916 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 01:12:37.174 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 01:12:37.433 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 01:12:37.433 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 01:12:37.433 [22/268] Linking target lib/librte_log.so.24.1 01:12:37.433 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 01:12:37.433 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 01:12:37.433 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 01:12:37.433 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 01:12:37.433 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 01:12:37.691 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 01:12:37.691 [29/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 01:12:37.691 [30/268] Linking target lib/librte_kvargs.so.24.1 01:12:37.691 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 01:12:37.691 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 01:12:37.691 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 01:12:37.950 [34/268] Linking target lib/librte_telemetry.so.24.1 01:12:37.950 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 01:12:37.950 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 01:12:37.950 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 01:12:37.950 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 01:12:37.950 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 01:12:37.950 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 01:12:38.209 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 01:12:38.209 [42/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 01:12:38.209 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 01:12:38.209 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 01:12:38.209 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 01:12:38.209 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 01:12:38.209 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 01:12:38.467 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 01:12:38.467 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 01:12:38.467 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 01:12:38.726 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 01:12:38.726 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 01:12:38.726 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 01:12:38.726 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 01:12:38.726 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 01:12:38.726 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 01:12:38.726 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 01:12:38.985 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 01:12:38.985 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 01:12:38.985 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 01:12:38.985 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 01:12:39.243 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 01:12:39.243 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 01:12:39.243 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 01:12:39.243 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 01:12:39.243 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 01:12:39.243 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 01:12:39.243 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
01:12:39.502 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 01:12:39.502 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 01:12:39.502 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 01:12:39.760 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 01:12:39.760 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 01:12:39.760 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 01:12:39.760 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 01:12:39.760 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 01:12:39.760 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 01:12:39.760 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 01:12:39.760 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 01:12:40.018 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 01:12:40.018 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 01:12:40.018 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 01:12:40.277 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 01:12:40.277 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 01:12:40.277 [85/268] Linking static target lib/librte_eal.a 01:12:40.277 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 01:12:40.277 [87/268] Linking static target lib/librte_ring.a 01:12:40.537 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 01:12:40.537 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 01:12:40.537 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 01:12:40.537 [91/268] Linking static target lib/librte_rcu.a 01:12:40.537 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 01:12:40.537 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 01:12:40.537 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 01:12:40.537 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 01:12:40.537 [96/268] Linking static target lib/librte_mempool.a 01:12:40.795 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 01:12:40.795 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 01:12:40.795 [99/268] Linking static target lib/librte_mbuf.a 01:12:40.795 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 01:12:40.795 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 01:12:41.054 [102/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 01:12:41.054 [103/268] Linking static target lib/net/libnet_crc_avx512_lib.a 01:12:41.054 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 01:12:41.054 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 01:12:41.054 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 01:12:41.313 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 01:12:41.313 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 01:12:41.313 [109/268] Linking static target lib/librte_meter.a 01:12:41.313 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 01:12:41.313 [111/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 01:12:41.573 [112/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 01:12:41.573 [113/268] Linking static target lib/librte_net.a 01:12:41.573 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 01:12:41.833 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 01:12:41.833 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 01:12:41.833 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 01:12:41.833 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 01:12:41.833 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 01:12:41.833 [120/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 01:12:42.092 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 01:12:42.092 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 01:12:42.353 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 01:12:42.353 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 01:12:42.353 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 01:12:42.353 [126/268] Linking static target lib/librte_pci.a 01:12:42.353 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 01:12:42.613 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 01:12:42.613 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 01:12:42.613 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 01:12:42.613 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 01:12:42.613 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 01:12:42.613 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 01:12:42.613 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 01:12:42.613 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 01:12:42.613 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 01:12:42.613 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 01:12:42.957 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 01:12:42.957 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 01:12:42.957 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 01:12:42.957 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 01:12:42.957 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 01:12:42.957 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 01:12:42.957 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 01:12:42.957 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 01:12:42.957 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 01:12:42.957 [147/268] Linking static target lib/librte_ethdev.a 01:12:42.957 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 01:12:42.957 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 01:12:42.957 [150/268] Linking static target lib/librte_cmdline.a 
01:12:43.214 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 01:12:43.214 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 01:12:43.214 [153/268] Linking static target lib/librte_timer.a 01:12:43.214 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 01:12:43.472 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 01:12:43.472 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 01:12:43.472 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 01:12:43.472 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 01:12:43.730 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 01:12:43.730 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 01:12:43.730 [161/268] Linking static target lib/librte_hash.a 01:12:43.730 [162/268] Linking static target lib/librte_compressdev.a 01:12:43.730 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 01:12:43.987 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 01:12:43.987 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 01:12:43.987 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 01:12:43.987 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 01:12:43.987 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 01:12:43.987 [169/268] Linking static target lib/librte_dmadev.a 01:12:44.244 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 01:12:44.244 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 01:12:44.244 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 01:12:44.502 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 01:12:44.502 [174/268] Linking static target lib/librte_cryptodev.a 01:12:44.502 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 01:12:44.502 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 01:12:44.502 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 01:12:44.761 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 01:12:44.761 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 01:12:44.761 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 01:12:44.761 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 01:12:44.761 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 01:12:45.019 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 01:12:45.019 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 01:12:45.019 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 01:12:45.019 [186/268] Linking static target lib/librte_power.a 01:12:45.277 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 01:12:45.277 [188/268] Linking static target lib/librte_reorder.a 01:12:45.277 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 01:12:45.277 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 
01:12:45.277 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 01:12:45.277 [192/268] Linking static target lib/librte_security.a 01:12:45.277 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 01:12:45.536 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 01:12:45.796 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 01:12:46.055 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 01:12:46.055 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 01:12:46.055 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 01:12:46.055 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 01:12:46.055 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 01:12:46.314 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 01:12:46.314 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 01:12:46.573 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 01:12:46.573 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 01:12:46.573 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 01:12:46.573 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 01:12:46.573 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 01:12:46.573 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 01:12:46.573 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 01:12:46.832 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 01:12:46.832 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 01:12:46.832 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 01:12:46.832 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 01:12:46.832 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 01:12:46.832 [215/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 01:12:46.832 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 01:12:46.832 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 01:12:46.832 [218/268] Linking static target drivers/librte_bus_pci.a 01:12:46.832 [219/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 01:12:46.832 [220/268] Linking static target drivers/librte_bus_vdev.a 01:12:47.091 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 01:12:47.091 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 01:12:47.091 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 01:12:47.091 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 01:12:47.091 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 01:12:47.091 [226/268] Linking static target drivers/librte_mempool_ring.a 01:12:47.091 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 01:12:47.350 [228/268] Generating 
drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 01:12:48.288 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 01:12:48.288 [230/268] Linking static target lib/librte_vhost.a 01:12:50.210 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 01:12:50.779 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 01:12:50.779 [233/268] Linking target lib/librte_eal.so.24.1 01:12:51.038 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 01:12:51.038 [235/268] Linking target lib/librte_meter.so.24.1 01:12:51.038 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 01:12:51.038 [237/268] Linking target lib/librte_pci.so.24.1 01:12:51.038 [238/268] Linking target lib/librte_timer.so.24.1 01:12:51.038 [239/268] Linking target lib/librte_ring.so.24.1 01:12:51.038 [240/268] Linking target lib/librte_dmadev.so.24.1 01:12:51.297 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 01:12:51.297 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 01:12:51.297 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 01:12:51.297 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 01:12:51.297 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 01:12:51.297 [246/268] Linking target lib/librte_rcu.so.24.1 01:12:51.297 [247/268] Linking target drivers/librte_bus_pci.so.24.1 01:12:51.297 [248/268] Linking target lib/librte_mempool.so.24.1 01:12:51.297 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 01:12:51.297 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 01:12:51.297 [251/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 01:12:51.297 [252/268] Linking target lib/librte_mbuf.so.24.1 01:12:51.297 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 01:12:51.556 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 01:12:51.556 [255/268] Linking target lib/librte_reorder.so.24.1 01:12:51.556 [256/268] Linking target lib/librte_net.so.24.1 01:12:51.556 [257/268] Linking target lib/librte_cryptodev.so.24.1 01:12:51.556 [258/268] Linking target lib/librte_compressdev.so.24.1 01:12:51.814 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 01:12:51.814 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 01:12:51.814 [261/268] Linking target lib/librte_hash.so.24.1 01:12:51.814 [262/268] Linking target lib/librte_cmdline.so.24.1 01:12:51.814 [263/268] Linking target lib/librte_security.so.24.1 01:12:51.814 [264/268] Linking target lib/librte_ethdev.so.24.1 01:12:52.073 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 01:12:52.073 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 01:12:52.073 [267/268] Linking target lib/librte_power.so.24.1 01:12:52.073 [268/268] Linking target lib/librte_vhost.so.24.1 01:12:52.073 INFO: autodetecting backend as ninja 01:12:52.073 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 01:13:18.612 CC lib/log/log.o 01:13:18.612 CC lib/ut_mock/mock.o 
01:13:18.612 CC lib/log/log_deprecated.o 01:13:18.612 CC lib/log/log_flags.o 01:13:18.612 CC lib/ut/ut.o 01:13:18.612 LIB libspdk_ut_mock.a 01:13:18.612 LIB libspdk_ut.a 01:13:18.612 LIB libspdk_log.a 01:13:18.612 SO libspdk_ut_mock.so.6.0 01:13:18.612 SO libspdk_ut.so.2.0 01:13:18.612 SO libspdk_log.so.7.1 01:13:18.613 SYMLINK libspdk_ut_mock.so 01:13:18.613 SYMLINK libspdk_ut.so 01:13:18.613 SYMLINK libspdk_log.so 01:13:18.613 CC lib/dma/dma.o 01:13:18.613 CC lib/util/base64.o 01:13:18.613 CXX lib/trace_parser/trace.o 01:13:18.613 CC lib/util/bit_array.o 01:13:18.613 CC lib/util/crc16.o 01:13:18.613 CC lib/util/crc32.o 01:13:18.613 CC lib/util/cpuset.o 01:13:18.613 CC lib/util/crc32c.o 01:13:18.613 CC lib/ioat/ioat.o 01:13:18.613 CC lib/vfio_user/host/vfio_user_pci.o 01:13:18.613 CC lib/vfio_user/host/vfio_user.o 01:13:18.613 CC lib/util/crc32_ieee.o 01:13:18.613 CC lib/util/crc64.o 01:13:18.613 CC lib/util/dif.o 01:13:18.613 LIB libspdk_dma.a 01:13:18.613 CC lib/util/fd.o 01:13:18.613 CC lib/util/fd_group.o 01:13:18.613 SO libspdk_dma.so.5.0 01:13:18.613 LIB libspdk_ioat.a 01:13:18.613 SO libspdk_ioat.so.7.0 01:13:18.613 SYMLINK libspdk_dma.so 01:13:18.613 CC lib/util/file.o 01:13:18.613 CC lib/util/hexlify.o 01:13:18.613 CC lib/util/iov.o 01:13:18.613 SYMLINK libspdk_ioat.so 01:13:18.613 CC lib/util/math.o 01:13:18.613 CC lib/util/net.o 01:13:18.613 CC lib/util/pipe.o 01:13:18.613 LIB libspdk_vfio_user.a 01:13:18.613 SO libspdk_vfio_user.so.5.0 01:13:18.613 CC lib/util/strerror_tls.o 01:13:18.613 CC lib/util/string.o 01:13:18.613 CC lib/util/uuid.o 01:13:18.613 CC lib/util/xor.o 01:13:18.613 SYMLINK libspdk_vfio_user.so 01:13:18.613 CC lib/util/zipf.o 01:13:18.613 CC lib/util/md5.o 01:13:18.613 LIB libspdk_util.a 01:13:18.613 SO libspdk_util.so.10.1 01:13:18.613 LIB libspdk_trace_parser.a 01:13:18.613 SO libspdk_trace_parser.so.6.0 01:13:18.613 SYMLINK libspdk_util.so 01:13:18.613 SYMLINK libspdk_trace_parser.so 01:13:18.613 CC lib/vmd/vmd.o 01:13:18.613 CC lib/vmd/led.o 01:13:18.613 CC lib/conf/conf.o 01:13:18.613 CC lib/idxd/idxd.o 01:13:18.613 CC lib/idxd/idxd_user.o 01:13:18.613 CC lib/idxd/idxd_kernel.o 01:13:18.613 CC lib/json/json_parse.o 01:13:18.613 CC lib/json/json_util.o 01:13:18.613 CC lib/rdma_utils/rdma_utils.o 01:13:18.613 CC lib/env_dpdk/env.o 01:13:18.613 CC lib/env_dpdk/memory.o 01:13:18.613 CC lib/env_dpdk/pci.o 01:13:18.613 CC lib/json/json_write.o 01:13:18.613 LIB libspdk_conf.a 01:13:18.613 CC lib/env_dpdk/init.o 01:13:18.613 CC lib/env_dpdk/threads.o 01:13:18.613 SO libspdk_conf.so.6.0 01:13:18.613 SYMLINK libspdk_conf.so 01:13:18.613 LIB libspdk_rdma_utils.a 01:13:18.613 CC lib/env_dpdk/pci_ioat.o 01:13:18.613 SO libspdk_rdma_utils.so.1.0 01:13:18.613 CC lib/env_dpdk/pci_virtio.o 01:13:18.613 SYMLINK libspdk_rdma_utils.so 01:13:18.613 CC lib/env_dpdk/pci_vmd.o 01:13:18.613 LIB libspdk_json.a 01:13:18.613 CC lib/env_dpdk/pci_idxd.o 01:13:18.613 SO libspdk_json.so.6.0 01:13:18.613 LIB libspdk_idxd.a 01:13:18.613 CC lib/env_dpdk/pci_event.o 01:13:18.613 CC lib/env_dpdk/sigbus_handler.o 01:13:18.613 SO libspdk_idxd.so.12.1 01:13:18.613 LIB libspdk_vmd.a 01:13:18.613 SYMLINK libspdk_json.so 01:13:18.613 CC lib/env_dpdk/pci_dpdk.o 01:13:18.613 SO libspdk_vmd.so.6.0 01:13:18.613 SYMLINK libspdk_idxd.so 01:13:18.613 CC lib/env_dpdk/pci_dpdk_2207.o 01:13:18.613 CC lib/env_dpdk/pci_dpdk_2211.o 01:13:18.613 SYMLINK libspdk_vmd.so 01:13:18.613 CC lib/rdma_provider/common.o 01:13:18.613 CC lib/rdma_provider/rdma_provider_verbs.o 01:13:18.613 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 01:13:18.613 CC lib/jsonrpc/jsonrpc_client.o 01:13:18.613 CC lib/jsonrpc/jsonrpc_server.o 01:13:18.613 CC lib/jsonrpc/jsonrpc_server_tcp.o 01:13:18.613 LIB libspdk_rdma_provider.a 01:13:18.613 SO libspdk_rdma_provider.so.7.0 01:13:18.613 SYMLINK libspdk_rdma_provider.so 01:13:18.613 LIB libspdk_jsonrpc.a 01:13:18.613 SO libspdk_jsonrpc.so.6.0 01:13:18.613 SYMLINK libspdk_jsonrpc.so 01:13:18.613 LIB libspdk_env_dpdk.a 01:13:18.613 SO libspdk_env_dpdk.so.15.1 01:13:18.613 SYMLINK libspdk_env_dpdk.so 01:13:18.613 CC lib/rpc/rpc.o 01:13:18.613 LIB libspdk_rpc.a 01:13:18.613 SO libspdk_rpc.so.6.0 01:13:18.613 SYMLINK libspdk_rpc.so 01:13:18.613 CC lib/notify/notify_rpc.o 01:13:18.613 CC lib/notify/notify.o 01:13:18.613 CC lib/keyring/keyring.o 01:13:18.613 CC lib/keyring/keyring_rpc.o 01:13:18.613 CC lib/trace/trace.o 01:13:18.613 CC lib/trace/trace_rpc.o 01:13:18.613 CC lib/trace/trace_flags.o 01:13:18.613 LIB libspdk_notify.a 01:13:18.613 SO libspdk_notify.so.6.0 01:13:18.872 LIB libspdk_trace.a 01:13:18.872 LIB libspdk_keyring.a 01:13:18.872 SYMLINK libspdk_notify.so 01:13:18.872 SO libspdk_trace.so.11.0 01:13:18.872 SO libspdk_keyring.so.2.0 01:13:18.872 SYMLINK libspdk_keyring.so 01:13:18.872 SYMLINK libspdk_trace.so 01:13:19.441 CC lib/thread/thread.o 01:13:19.441 CC lib/thread/iobuf.o 01:13:19.441 CC lib/sock/sock.o 01:13:19.441 CC lib/sock/sock_rpc.o 01:13:19.700 LIB libspdk_sock.a 01:13:19.700 SO libspdk_sock.so.10.0 01:13:19.700 SYMLINK libspdk_sock.so 01:13:20.268 CC lib/nvme/nvme_ctrlr_cmd.o 01:13:20.268 CC lib/nvme/nvme_ctrlr.o 01:13:20.268 CC lib/nvme/nvme_fabric.o 01:13:20.268 CC lib/nvme/nvme_ns_cmd.o 01:13:20.268 CC lib/nvme/nvme_ns.o 01:13:20.268 CC lib/nvme/nvme_pcie_common.o 01:13:20.268 CC lib/nvme/nvme_pcie.o 01:13:20.268 CC lib/nvme/nvme_qpair.o 01:13:20.268 CC lib/nvme/nvme.o 01:13:20.527 LIB libspdk_thread.a 01:13:20.527 SO libspdk_thread.so.11.0 01:13:20.786 SYMLINK libspdk_thread.so 01:13:20.786 CC lib/nvme/nvme_quirks.o 01:13:20.786 CC lib/nvme/nvme_transport.o 01:13:20.786 CC lib/nvme/nvme_discovery.o 01:13:21.046 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 01:13:21.046 CC lib/nvme/nvme_ns_ocssd_cmd.o 01:13:21.046 CC lib/nvme/nvme_tcp.o 01:13:21.046 CC lib/nvme/nvme_opal.o 01:13:21.046 CC lib/accel/accel.o 01:13:21.304 CC lib/accel/accel_rpc.o 01:13:21.304 CC lib/accel/accel_sw.o 01:13:21.561 CC lib/blob/blobstore.o 01:13:21.561 CC lib/init/json_config.o 01:13:21.561 CC lib/blob/request.o 01:13:21.561 CC lib/blob/zeroes.o 01:13:21.818 CC lib/virtio/virtio.o 01:13:21.818 CC lib/blob/blob_bs_dev.o 01:13:21.818 CC lib/fsdev/fsdev.o 01:13:21.818 CC lib/fsdev/fsdev_io.o 01:13:21.818 CC lib/init/subsystem.o 01:13:21.818 CC lib/nvme/nvme_io_msg.o 01:13:21.818 CC lib/nvme/nvme_poll_group.o 01:13:22.074 CC lib/virtio/virtio_vhost_user.o 01:13:22.074 CC lib/init/subsystem_rpc.o 01:13:22.074 LIB libspdk_accel.a 01:13:22.074 CC lib/fsdev/fsdev_rpc.o 01:13:22.074 SO libspdk_accel.so.16.0 01:13:22.074 CC lib/init/rpc.o 01:13:22.331 SYMLINK libspdk_accel.so 01:13:22.331 CC lib/nvme/nvme_zns.o 01:13:22.331 CC lib/nvme/nvme_stubs.o 01:13:22.331 CC lib/virtio/virtio_vfio_user.o 01:13:22.331 LIB libspdk_init.a 01:13:22.331 LIB libspdk_fsdev.a 01:13:22.331 CC lib/nvme/nvme_auth.o 01:13:22.331 SO libspdk_init.so.6.0 01:13:22.331 SO libspdk_fsdev.so.2.0 01:13:22.331 CC lib/nvme/nvme_cuse.o 01:13:22.331 SYMLINK libspdk_init.so 01:13:22.331 SYMLINK libspdk_fsdev.so 01:13:22.588 CC lib/virtio/virtio_pci.o 01:13:22.588 CC lib/nvme/nvme_rdma.o 01:13:22.588 CC 
lib/bdev/bdev.o 01:13:22.588 CC lib/event/app.o 01:13:22.588 CC lib/fuse_dispatcher/fuse_dispatcher.o 01:13:22.845 CC lib/event/reactor.o 01:13:22.845 LIB libspdk_virtio.a 01:13:22.845 CC lib/event/log_rpc.o 01:13:22.845 SO libspdk_virtio.so.7.0 01:13:22.845 SYMLINK libspdk_virtio.so 01:13:22.845 CC lib/bdev/bdev_rpc.o 01:13:22.845 CC lib/event/app_rpc.o 01:13:22.845 CC lib/bdev/bdev_zone.o 01:13:23.102 CC lib/bdev/part.o 01:13:23.102 CC lib/bdev/scsi_nvme.o 01:13:23.102 LIB libspdk_fuse_dispatcher.a 01:13:23.102 CC lib/event/scheduler_static.o 01:13:23.102 SO libspdk_fuse_dispatcher.so.1.0 01:13:23.359 SYMLINK libspdk_fuse_dispatcher.so 01:13:23.360 LIB libspdk_event.a 01:13:23.360 SO libspdk_event.so.14.0 01:13:23.664 SYMLINK libspdk_event.so 01:13:23.664 LIB libspdk_nvme.a 01:13:23.923 SO libspdk_nvme.so.15.0 01:13:24.181 LIB libspdk_blob.a 01:13:24.181 SYMLINK libspdk_nvme.so 01:13:24.181 SO libspdk_blob.so.12.0 01:13:24.440 SYMLINK libspdk_blob.so 01:13:24.698 CC lib/lvol/lvol.o 01:13:24.698 CC lib/blobfs/tree.o 01:13:24.698 CC lib/blobfs/blobfs.o 01:13:24.956 LIB libspdk_bdev.a 01:13:25.214 SO libspdk_bdev.so.17.0 01:13:25.214 SYMLINK libspdk_bdev.so 01:13:25.470 LIB libspdk_blobfs.a 01:13:25.470 CC lib/nvmf/ctrlr.o 01:13:25.470 CC lib/nvmf/ctrlr_discovery.o 01:13:25.470 CC lib/ftl/ftl_core.o 01:13:25.470 CC lib/nvmf/ctrlr_bdev.o 01:13:25.470 CC lib/ftl/ftl_init.o 01:13:25.470 CC lib/ublk/ublk.o 01:13:25.470 SO libspdk_blobfs.so.11.0 01:13:25.470 CC lib/nbd/nbd.o 01:13:25.470 CC lib/scsi/dev.o 01:13:25.470 LIB libspdk_lvol.a 01:13:25.470 SYMLINK libspdk_blobfs.so 01:13:25.470 CC lib/nbd/nbd_rpc.o 01:13:25.470 SO libspdk_lvol.so.11.0 01:13:25.728 SYMLINK libspdk_lvol.so 01:13:25.728 CC lib/scsi/lun.o 01:13:25.728 CC lib/ftl/ftl_layout.o 01:13:25.728 CC lib/scsi/port.o 01:13:25.728 CC lib/scsi/scsi.o 01:13:26.001 CC lib/scsi/scsi_bdev.o 01:13:26.001 CC lib/scsi/scsi_pr.o 01:13:26.001 CC lib/nvmf/subsystem.o 01:13:26.001 LIB libspdk_nbd.a 01:13:26.001 CC lib/nvmf/nvmf.o 01:13:26.001 SO libspdk_nbd.so.7.0 01:13:26.001 CC lib/ftl/ftl_debug.o 01:13:26.001 CC lib/ftl/ftl_io.o 01:13:26.001 SYMLINK libspdk_nbd.so 01:13:26.001 CC lib/ftl/ftl_sb.o 01:13:26.001 CC lib/ublk/ublk_rpc.o 01:13:26.260 CC lib/ftl/ftl_l2p.o 01:13:26.260 CC lib/ftl/ftl_l2p_flat.o 01:13:26.260 LIB libspdk_ublk.a 01:13:26.260 CC lib/nvmf/nvmf_rpc.o 01:13:26.260 CC lib/ftl/ftl_nv_cache.o 01:13:26.260 CC lib/nvmf/transport.o 01:13:26.260 SO libspdk_ublk.so.3.0 01:13:26.260 CC lib/scsi/scsi_rpc.o 01:13:26.260 SYMLINK libspdk_ublk.so 01:13:26.260 CC lib/scsi/task.o 01:13:26.518 CC lib/ftl/ftl_band.o 01:13:26.518 CC lib/ftl/ftl_band_ops.o 01:13:26.518 CC lib/ftl/ftl_writer.o 01:13:26.518 LIB libspdk_scsi.a 01:13:26.778 SO libspdk_scsi.so.9.0 01:13:26.778 SYMLINK libspdk_scsi.so 01:13:26.778 CC lib/nvmf/tcp.o 01:13:26.778 CC lib/ftl/ftl_rq.o 01:13:26.778 CC lib/ftl/ftl_reloc.o 01:13:27.036 CC lib/iscsi/conn.o 01:13:27.036 CC lib/vhost/vhost.o 01:13:27.036 CC lib/ftl/ftl_l2p_cache.o 01:13:27.036 CC lib/ftl/ftl_p2l.o 01:13:27.036 CC lib/ftl/ftl_p2l_log.o 01:13:27.036 CC lib/vhost/vhost_rpc.o 01:13:27.036 CC lib/vhost/vhost_scsi.o 01:13:27.293 CC lib/nvmf/stubs.o 01:13:27.293 CC lib/vhost/vhost_blk.o 01:13:27.293 CC lib/ftl/mngt/ftl_mngt.o 01:13:27.293 CC lib/ftl/mngt/ftl_mngt_bdev.o 01:13:27.551 CC lib/iscsi/init_grp.o 01:13:27.551 CC lib/vhost/rte_vhost_user.o 01:13:27.551 CC lib/ftl/mngt/ftl_mngt_shutdown.o 01:13:27.551 CC lib/ftl/mngt/ftl_mngt_startup.o 01:13:27.551 CC lib/iscsi/iscsi.o 01:13:27.808 CC 
lib/ftl/mngt/ftl_mngt_md.o 01:13:27.808 CC lib/iscsi/param.o 01:13:27.808 CC lib/iscsi/portal_grp.o 01:13:27.808 CC lib/iscsi/tgt_node.o 01:13:27.808 CC lib/iscsi/iscsi_subsystem.o 01:13:28.066 CC lib/ftl/mngt/ftl_mngt_misc.o 01:13:28.066 CC lib/nvmf/mdns_server.o 01:13:28.066 CC lib/nvmf/rdma.o 01:13:28.066 CC lib/nvmf/auth.o 01:13:28.066 CC lib/ftl/mngt/ftl_mngt_ioch.o 01:13:28.324 CC lib/iscsi/iscsi_rpc.o 01:13:28.324 CC lib/ftl/mngt/ftl_mngt_l2p.o 01:13:28.324 CC lib/ftl/mngt/ftl_mngt_band.o 01:13:28.324 CC lib/iscsi/task.o 01:13:28.324 CC lib/ftl/mngt/ftl_mngt_self_test.o 01:13:28.324 CC lib/ftl/mngt/ftl_mngt_p2l.o 01:13:28.583 CC lib/ftl/mngt/ftl_mngt_recovery.o 01:13:28.583 CC lib/ftl/mngt/ftl_mngt_upgrade.o 01:13:28.583 LIB libspdk_vhost.a 01:13:28.583 CC lib/ftl/utils/ftl_conf.o 01:13:28.583 CC lib/ftl/utils/ftl_md.o 01:13:28.583 SO libspdk_vhost.so.8.0 01:13:28.583 CC lib/ftl/utils/ftl_mempool.o 01:13:28.583 CC lib/ftl/utils/ftl_bitmap.o 01:13:28.583 SYMLINK libspdk_vhost.so 01:13:28.583 CC lib/ftl/utils/ftl_property.o 01:13:28.583 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 01:13:28.866 CC lib/ftl/upgrade/ftl_layout_upgrade.o 01:13:28.866 CC lib/ftl/upgrade/ftl_sb_upgrade.o 01:13:28.866 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 01:13:28.866 CC lib/ftl/upgrade/ftl_band_upgrade.o 01:13:28.866 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 01:13:28.866 CC lib/ftl/upgrade/ftl_trim_upgrade.o 01:13:28.866 CC lib/ftl/upgrade/ftl_sb_v3.o 01:13:28.866 CC lib/ftl/upgrade/ftl_sb_v5.o 01:13:28.866 CC lib/ftl/nvc/ftl_nvc_dev.o 01:13:29.123 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 01:13:29.123 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 01:13:29.124 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 01:13:29.124 CC lib/ftl/base/ftl_base_dev.o 01:13:29.124 CC lib/ftl/base/ftl_base_bdev.o 01:13:29.124 LIB libspdk_iscsi.a 01:13:29.124 CC lib/ftl/ftl_trace.o 01:13:29.124 SO libspdk_iscsi.so.8.0 01:13:29.382 SYMLINK libspdk_iscsi.so 01:13:29.382 LIB libspdk_ftl.a 01:13:29.641 SO libspdk_ftl.so.9.0 01:13:29.899 SYMLINK libspdk_ftl.so 01:13:30.158 LIB libspdk_nvmf.a 01:13:30.158 SO libspdk_nvmf.so.20.0 01:13:30.415 SYMLINK libspdk_nvmf.so 01:13:30.981 CC module/env_dpdk/env_dpdk_rpc.o 01:13:30.981 CC module/scheduler/dynamic/scheduler_dynamic.o 01:13:30.981 CC module/scheduler/gscheduler/gscheduler.o 01:13:30.981 CC module/keyring/linux/keyring.o 01:13:30.981 CC module/sock/posix/posix.o 01:13:30.981 CC module/blob/bdev/blob_bdev.o 01:13:30.981 CC module/fsdev/aio/fsdev_aio.o 01:13:30.981 CC module/scheduler/dpdk_governor/dpdk_governor.o 01:13:30.981 CC module/accel/error/accel_error.o 01:13:30.981 CC module/keyring/file/keyring.o 01:13:30.981 LIB libspdk_env_dpdk_rpc.a 01:13:30.981 SO libspdk_env_dpdk_rpc.so.6.0 01:13:30.981 CC module/keyring/linux/keyring_rpc.o 01:13:30.981 SYMLINK libspdk_env_dpdk_rpc.so 01:13:30.981 CC module/accel/error/accel_error_rpc.o 01:13:30.981 LIB libspdk_scheduler_gscheduler.a 01:13:30.981 CC module/keyring/file/keyring_rpc.o 01:13:30.981 LIB libspdk_scheduler_dpdk_governor.a 01:13:30.981 SO libspdk_scheduler_gscheduler.so.4.0 01:13:30.981 SO libspdk_scheduler_dpdk_governor.so.4.0 01:13:31.238 LIB libspdk_scheduler_dynamic.a 01:13:31.238 SO libspdk_scheduler_dynamic.so.4.0 01:13:31.238 SYMLINK libspdk_scheduler_gscheduler.so 01:13:31.238 SYMLINK libspdk_scheduler_dpdk_governor.so 01:13:31.238 LIB libspdk_keyring_linux.a 01:13:31.238 LIB libspdk_blob_bdev.a 01:13:31.238 LIB libspdk_accel_error.a 01:13:31.238 SYMLINK libspdk_scheduler_dynamic.so 01:13:31.238 LIB libspdk_keyring_file.a 01:13:31.238 CC 
module/fsdev/aio/fsdev_aio_rpc.o 01:13:31.238 SO libspdk_blob_bdev.so.12.0 01:13:31.238 SO libspdk_keyring_linux.so.1.0 01:13:31.238 SO libspdk_keyring_file.so.2.0 01:13:31.238 SO libspdk_accel_error.so.2.0 01:13:31.238 SYMLINK libspdk_blob_bdev.so 01:13:31.238 CC module/accel/ioat/accel_ioat.o 01:13:31.238 SYMLINK libspdk_keyring_linux.so 01:13:31.238 SYMLINK libspdk_accel_error.so 01:13:31.238 CC module/accel/ioat/accel_ioat_rpc.o 01:13:31.238 SYMLINK libspdk_keyring_file.so 01:13:31.238 CC module/accel/dsa/accel_dsa.o 01:13:31.238 CC module/accel/iaa/accel_iaa.o 01:13:31.496 CC module/fsdev/aio/linux_aio_mgr.o 01:13:31.496 LIB libspdk_accel_ioat.a 01:13:31.496 CC module/sock/uring/uring.o 01:13:31.496 SO libspdk_accel_ioat.so.6.0 01:13:31.496 CC module/bdev/delay/vbdev_delay.o 01:13:31.496 CC module/accel/iaa/accel_iaa_rpc.o 01:13:31.496 LIB libspdk_sock_posix.a 01:13:31.496 LIB libspdk_fsdev_aio.a 01:13:31.496 SYMLINK libspdk_accel_ioat.so 01:13:31.496 CC module/blobfs/bdev/blobfs_bdev.o 01:13:31.496 CC module/blobfs/bdev/blobfs_bdev_rpc.o 01:13:31.496 SO libspdk_sock_posix.so.6.0 01:13:31.755 SO libspdk_fsdev_aio.so.1.0 01:13:31.755 CC module/accel/dsa/accel_dsa_rpc.o 01:13:31.755 CC module/bdev/error/vbdev_error.o 01:13:31.755 SYMLINK libspdk_fsdev_aio.so 01:13:31.755 SYMLINK libspdk_sock_posix.so 01:13:31.755 CC module/bdev/delay/vbdev_delay_rpc.o 01:13:31.755 LIB libspdk_accel_iaa.a 01:13:31.755 CC module/bdev/gpt/gpt.o 01:13:31.755 SO libspdk_accel_iaa.so.3.0 01:13:31.755 LIB libspdk_blobfs_bdev.a 01:13:31.755 LIB libspdk_accel_dsa.a 01:13:31.755 SO libspdk_blobfs_bdev.so.6.0 01:13:31.755 SYMLINK libspdk_accel_iaa.so 01:13:31.755 SO libspdk_accel_dsa.so.5.0 01:13:31.755 CC module/bdev/lvol/vbdev_lvol.o 01:13:31.755 CC module/bdev/gpt/vbdev_gpt.o 01:13:32.012 SYMLINK libspdk_blobfs_bdev.so 01:13:32.012 SYMLINK libspdk_accel_dsa.so 01:13:32.012 CC module/bdev/error/vbdev_error_rpc.o 01:13:32.012 CC module/bdev/lvol/vbdev_lvol_rpc.o 01:13:32.012 LIB libspdk_bdev_delay.a 01:13:32.012 SO libspdk_bdev_delay.so.6.0 01:13:32.012 CC module/bdev/malloc/bdev_malloc.o 01:13:32.012 CC module/bdev/null/bdev_null.o 01:13:32.012 SYMLINK libspdk_bdev_delay.so 01:13:32.012 CC module/bdev/passthru/vbdev_passthru.o 01:13:32.012 LIB libspdk_bdev_error.a 01:13:32.012 CC module/bdev/nvme/bdev_nvme.o 01:13:32.012 SO libspdk_bdev_error.so.6.0 01:13:32.271 LIB libspdk_sock_uring.a 01:13:32.271 SO libspdk_sock_uring.so.5.0 01:13:32.271 LIB libspdk_bdev_gpt.a 01:13:32.271 SYMLINK libspdk_bdev_error.so 01:13:32.271 SO libspdk_bdev_gpt.so.6.0 01:13:32.271 CC module/bdev/raid/bdev_raid.o 01:13:32.271 SYMLINK libspdk_sock_uring.so 01:13:32.271 SYMLINK libspdk_bdev_gpt.so 01:13:32.271 CC module/bdev/passthru/vbdev_passthru_rpc.o 01:13:32.271 CC module/bdev/null/bdev_null_rpc.o 01:13:32.271 CC module/bdev/malloc/bdev_malloc_rpc.o 01:13:32.271 CC module/bdev/split/vbdev_split.o 01:13:32.271 LIB libspdk_bdev_lvol.a 01:13:32.271 CC module/bdev/split/vbdev_split_rpc.o 01:13:32.529 SO libspdk_bdev_lvol.so.6.0 01:13:32.529 CC module/bdev/zone_block/vbdev_zone_block.o 01:13:32.529 CC module/bdev/uring/bdev_uring.o 01:13:32.529 LIB libspdk_bdev_passthru.a 01:13:32.529 LIB libspdk_bdev_null.a 01:13:32.529 SYMLINK libspdk_bdev_lvol.so 01:13:32.529 CC module/bdev/uring/bdev_uring_rpc.o 01:13:32.529 SO libspdk_bdev_passthru.so.6.0 01:13:32.529 LIB libspdk_bdev_malloc.a 01:13:32.529 SO libspdk_bdev_null.so.6.0 01:13:32.529 SO libspdk_bdev_malloc.so.6.0 01:13:32.529 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 01:13:32.529 
SYMLINK libspdk_bdev_null.so 01:13:32.529 SYMLINK libspdk_bdev_passthru.so 01:13:32.529 SYMLINK libspdk_bdev_malloc.so 01:13:32.529 LIB libspdk_bdev_split.a 01:13:32.529 SO libspdk_bdev_split.so.6.0 01:13:32.787 SYMLINK libspdk_bdev_split.so 01:13:32.787 CC module/bdev/nvme/bdev_nvme_rpc.o 01:13:32.787 CC module/bdev/raid/bdev_raid_rpc.o 01:13:32.787 CC module/bdev/aio/bdev_aio.o 01:13:32.787 LIB libspdk_bdev_zone_block.a 01:13:32.787 CC module/bdev/ftl/bdev_ftl.o 01:13:32.787 CC module/bdev/iscsi/bdev_iscsi.o 01:13:32.787 SO libspdk_bdev_zone_block.so.6.0 01:13:32.787 LIB libspdk_bdev_uring.a 01:13:32.787 CC module/bdev/virtio/bdev_virtio_scsi.o 01:13:32.787 SO libspdk_bdev_uring.so.6.0 01:13:32.787 SYMLINK libspdk_bdev_zone_block.so 01:13:32.787 CC module/bdev/raid/bdev_raid_sb.o 01:13:32.787 SYMLINK libspdk_bdev_uring.so 01:13:32.787 CC module/bdev/ftl/bdev_ftl_rpc.o 01:13:32.787 CC module/bdev/raid/raid0.o 01:13:33.044 CC module/bdev/raid/raid1.o 01:13:33.044 CC module/bdev/aio/bdev_aio_rpc.o 01:13:33.044 LIB libspdk_bdev_ftl.a 01:13:33.044 CC module/bdev/virtio/bdev_virtio_blk.o 01:13:33.044 CC module/bdev/iscsi/bdev_iscsi_rpc.o 01:13:33.045 SO libspdk_bdev_ftl.so.6.0 01:13:33.045 CC module/bdev/nvme/nvme_rpc.o 01:13:33.045 LIB libspdk_bdev_aio.a 01:13:33.045 SYMLINK libspdk_bdev_ftl.so 01:13:33.045 CC module/bdev/nvme/bdev_mdns_client.o 01:13:33.301 SO libspdk_bdev_aio.so.6.0 01:13:33.301 CC module/bdev/virtio/bdev_virtio_rpc.o 01:13:33.301 CC module/bdev/raid/concat.o 01:13:33.301 CC module/bdev/nvme/vbdev_opal.o 01:13:33.301 SYMLINK libspdk_bdev_aio.so 01:13:33.301 LIB libspdk_bdev_iscsi.a 01:13:33.301 CC module/bdev/nvme/vbdev_opal_rpc.o 01:13:33.301 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 01:13:33.301 SO libspdk_bdev_iscsi.so.6.0 01:13:33.301 SYMLINK libspdk_bdev_iscsi.so 01:13:33.559 LIB libspdk_bdev_virtio.a 01:13:33.559 LIB libspdk_bdev_raid.a 01:13:33.559 SO libspdk_bdev_virtio.so.6.0 01:13:33.559 SO libspdk_bdev_raid.so.6.0 01:13:33.559 SYMLINK libspdk_bdev_virtio.so 01:13:33.559 SYMLINK libspdk_bdev_raid.so 01:13:34.495 LIB libspdk_bdev_nvme.a 01:13:34.495 SO libspdk_bdev_nvme.so.7.1 01:13:34.495 SYMLINK libspdk_bdev_nvme.so 01:13:35.062 CC module/event/subsystems/vmd/vmd.o 01:13:35.062 CC module/event/subsystems/vmd/vmd_rpc.o 01:13:35.062 CC module/event/subsystems/sock/sock.o 01:13:35.062 CC module/event/subsystems/fsdev/fsdev.o 01:13:35.062 CC module/event/subsystems/iobuf/iobuf.o 01:13:35.062 CC module/event/subsystems/iobuf/iobuf_rpc.o 01:13:35.062 CC module/event/subsystems/scheduler/scheduler.o 01:13:35.062 CC module/event/subsystems/vhost_blk/vhost_blk.o 01:13:35.062 CC module/event/subsystems/keyring/keyring.o 01:13:35.319 LIB libspdk_event_fsdev.a 01:13:35.319 LIB libspdk_event_scheduler.a 01:13:35.319 LIB libspdk_event_vhost_blk.a 01:13:35.319 LIB libspdk_event_keyring.a 01:13:35.319 LIB libspdk_event_sock.a 01:13:35.319 LIB libspdk_event_vmd.a 01:13:35.319 LIB libspdk_event_iobuf.a 01:13:35.319 SO libspdk_event_scheduler.so.4.0 01:13:35.319 SO libspdk_event_vhost_blk.so.3.0 01:13:35.319 SO libspdk_event_fsdev.so.1.0 01:13:35.319 SO libspdk_event_keyring.so.1.0 01:13:35.319 SO libspdk_event_sock.so.5.0 01:13:35.319 SO libspdk_event_vmd.so.6.0 01:13:35.319 SO libspdk_event_iobuf.so.3.0 01:13:35.319 SYMLINK libspdk_event_vhost_blk.so 01:13:35.319 SYMLINK libspdk_event_fsdev.so 01:13:35.319 SYMLINK libspdk_event_scheduler.so 01:13:35.319 SYMLINK libspdk_event_keyring.so 01:13:35.319 SYMLINK libspdk_event_sock.so 01:13:35.319 SYMLINK libspdk_event_vmd.so 
01:13:35.319 SYMLINK libspdk_event_iobuf.so 01:13:35.886 CC module/event/subsystems/accel/accel.o 01:13:35.886 LIB libspdk_event_accel.a 01:13:35.886 SO libspdk_event_accel.so.6.0 01:13:36.145 SYMLINK libspdk_event_accel.so 01:13:36.404 CC module/event/subsystems/bdev/bdev.o 01:13:36.663 LIB libspdk_event_bdev.a 01:13:36.663 SO libspdk_event_bdev.so.6.0 01:13:36.663 SYMLINK libspdk_event_bdev.so 01:13:37.233 CC module/event/subsystems/ublk/ublk.o 01:13:37.233 CC module/event/subsystems/nbd/nbd.o 01:13:37.233 CC module/event/subsystems/nvmf/nvmf_rpc.o 01:13:37.233 CC module/event/subsystems/nvmf/nvmf_tgt.o 01:13:37.233 CC module/event/subsystems/scsi/scsi.o 01:13:37.233 LIB libspdk_event_nbd.a 01:13:37.233 LIB libspdk_event_ublk.a 01:13:37.233 LIB libspdk_event_scsi.a 01:13:37.233 SO libspdk_event_nbd.so.6.0 01:13:37.233 SO libspdk_event_ublk.so.3.0 01:13:37.233 SO libspdk_event_scsi.so.6.0 01:13:37.492 SYMLINK libspdk_event_ublk.so 01:13:37.492 SYMLINK libspdk_event_nbd.so 01:13:37.492 LIB libspdk_event_nvmf.a 01:13:37.492 SYMLINK libspdk_event_scsi.so 01:13:37.492 SO libspdk_event_nvmf.so.6.0 01:13:37.492 SYMLINK libspdk_event_nvmf.so 01:13:37.752 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 01:13:37.752 CC module/event/subsystems/iscsi/iscsi.o 01:13:38.011 LIB libspdk_event_vhost_scsi.a 01:13:38.011 SO libspdk_event_vhost_scsi.so.3.0 01:13:38.012 LIB libspdk_event_iscsi.a 01:13:38.012 SO libspdk_event_iscsi.so.6.0 01:13:38.012 SYMLINK libspdk_event_vhost_scsi.so 01:13:38.012 SYMLINK libspdk_event_iscsi.so 01:13:38.271 SO libspdk.so.6.0 01:13:38.271 SYMLINK libspdk.so 01:13:38.531 CC app/trace_record/trace_record.o 01:13:38.531 CC app/spdk_lspci/spdk_lspci.o 01:13:38.531 CXX app/trace/trace.o 01:13:38.531 CC app/iscsi_tgt/iscsi_tgt.o 01:13:38.531 CC app/nvmf_tgt/nvmf_main.o 01:13:38.531 CC examples/interrupt_tgt/interrupt_tgt.o 01:13:38.790 CC app/spdk_tgt/spdk_tgt.o 01:13:38.790 CC examples/util/zipf/zipf.o 01:13:38.790 CC test/thread/poller_perf/poller_perf.o 01:13:38.790 CC examples/ioat/perf/perf.o 01:13:38.790 LINK spdk_lspci 01:13:38.790 LINK poller_perf 01:13:38.790 LINK iscsi_tgt 01:13:38.790 LINK nvmf_tgt 01:13:38.790 LINK interrupt_tgt 01:13:38.790 LINK spdk_trace_record 01:13:38.790 LINK zipf 01:13:38.790 LINK spdk_tgt 01:13:39.050 LINK ioat_perf 01:13:39.050 LINK spdk_trace 01:13:39.050 CC app/spdk_nvme_perf/perf.o 01:13:39.050 CC examples/ioat/verify/verify.o 01:13:39.050 CC app/spdk_nvme_identify/identify.o 01:13:39.050 CC app/spdk_nvme_discover/discovery_aer.o 01:13:39.050 CC app/spdk_top/spdk_top.o 01:13:39.050 CC test/dma/test_dma/test_dma.o 01:13:39.318 CC app/spdk_dd/spdk_dd.o 01:13:39.318 CC examples/sock/hello_world/hello_sock.o 01:13:39.318 CC examples/thread/thread/thread_ex.o 01:13:39.318 LINK spdk_nvme_discover 01:13:39.318 CC app/fio/nvme/fio_plugin.o 01:13:39.318 LINK verify 01:13:39.595 LINK hello_sock 01:13:39.595 LINK thread 01:13:39.595 LINK spdk_dd 01:13:39.595 CC test/app/bdev_svc/bdev_svc.o 01:13:39.595 LINK test_dma 01:13:39.595 CC examples/vmd/lsvmd/lsvmd.o 01:13:39.855 LINK lsvmd 01:13:39.855 CC test/app/histogram_perf/histogram_perf.o 01:13:39.855 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 01:13:39.855 LINK bdev_svc 01:13:39.855 LINK spdk_nvme 01:13:39.855 LINK spdk_nvme_perf 01:13:39.855 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 01:13:39.855 LINK histogram_perf 01:13:39.855 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 01:13:39.855 LINK spdk_nvme_identify 01:13:40.115 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 01:13:40.115 LINK spdk_top 01:13:40.115 CC 
examples/vmd/led/led.o 01:13:40.115 CC app/fio/bdev/fio_plugin.o 01:13:40.115 LINK nvme_fuzz 01:13:40.115 CC examples/idxd/perf/perf.o 01:13:40.115 LINK led 01:13:40.115 CC examples/fsdev/hello_world/hello_fsdev.o 01:13:40.375 CC examples/accel/perf/accel_perf.o 01:13:40.375 CC examples/nvme/hello_world/hello_world.o 01:13:40.375 CC examples/blob/hello_world/hello_blob.o 01:13:40.375 CC test/app/jsoncat/jsoncat.o 01:13:40.375 LINK vhost_fuzz 01:13:40.375 CC test/app/stub/stub.o 01:13:40.375 LINK hello_fsdev 01:13:40.375 LINK idxd_perf 01:13:40.634 LINK jsoncat 01:13:40.634 LINK hello_world 01:13:40.634 LINK hello_blob 01:13:40.634 LINK spdk_bdev 01:13:40.634 LINK stub 01:13:40.634 LINK accel_perf 01:13:40.635 CC examples/nvme/reconnect/reconnect.o 01:13:40.635 CC examples/nvme/nvme_manage/nvme_manage.o 01:13:40.893 CC examples/nvme/arbitration/arbitration.o 01:13:40.893 TEST_HEADER include/spdk/accel.h 01:13:40.893 TEST_HEADER include/spdk/accel_module.h 01:13:40.893 TEST_HEADER include/spdk/assert.h 01:13:40.893 TEST_HEADER include/spdk/barrier.h 01:13:40.893 TEST_HEADER include/spdk/base64.h 01:13:40.893 TEST_HEADER include/spdk/bdev.h 01:13:40.893 TEST_HEADER include/spdk/bdev_module.h 01:13:40.893 TEST_HEADER include/spdk/bdev_zone.h 01:13:40.893 TEST_HEADER include/spdk/bit_array.h 01:13:40.893 TEST_HEADER include/spdk/bit_pool.h 01:13:40.893 TEST_HEADER include/spdk/blob_bdev.h 01:13:40.893 TEST_HEADER include/spdk/blobfs_bdev.h 01:13:40.893 TEST_HEADER include/spdk/blobfs.h 01:13:40.893 TEST_HEADER include/spdk/blob.h 01:13:40.893 CC app/vhost/vhost.o 01:13:40.893 TEST_HEADER include/spdk/conf.h 01:13:40.893 TEST_HEADER include/spdk/config.h 01:13:40.893 TEST_HEADER include/spdk/cpuset.h 01:13:40.893 TEST_HEADER include/spdk/crc16.h 01:13:40.893 TEST_HEADER include/spdk/crc32.h 01:13:40.893 TEST_HEADER include/spdk/crc64.h 01:13:40.893 TEST_HEADER include/spdk/dif.h 01:13:40.893 TEST_HEADER include/spdk/dma.h 01:13:40.893 TEST_HEADER include/spdk/endian.h 01:13:40.893 TEST_HEADER include/spdk/env_dpdk.h 01:13:40.893 TEST_HEADER include/spdk/env.h 01:13:40.893 TEST_HEADER include/spdk/event.h 01:13:40.893 CC examples/nvme/hotplug/hotplug.o 01:13:40.893 TEST_HEADER include/spdk/fd_group.h 01:13:40.893 TEST_HEADER include/spdk/fd.h 01:13:40.893 TEST_HEADER include/spdk/file.h 01:13:40.893 TEST_HEADER include/spdk/fsdev.h 01:13:40.893 TEST_HEADER include/spdk/fsdev_module.h 01:13:40.893 TEST_HEADER include/spdk/ftl.h 01:13:40.893 TEST_HEADER include/spdk/fuse_dispatcher.h 01:13:40.893 TEST_HEADER include/spdk/gpt_spec.h 01:13:40.893 TEST_HEADER include/spdk/hexlify.h 01:13:40.893 TEST_HEADER include/spdk/histogram_data.h 01:13:40.893 TEST_HEADER include/spdk/idxd.h 01:13:40.893 TEST_HEADER include/spdk/idxd_spec.h 01:13:40.893 TEST_HEADER include/spdk/init.h 01:13:40.893 TEST_HEADER include/spdk/ioat.h 01:13:40.893 TEST_HEADER include/spdk/ioat_spec.h 01:13:40.893 TEST_HEADER include/spdk/iscsi_spec.h 01:13:40.893 TEST_HEADER include/spdk/json.h 01:13:40.893 TEST_HEADER include/spdk/jsonrpc.h 01:13:40.893 TEST_HEADER include/spdk/keyring.h 01:13:40.893 TEST_HEADER include/spdk/keyring_module.h 01:13:40.893 CC examples/blob/cli/blobcli.o 01:13:40.893 TEST_HEADER include/spdk/likely.h 01:13:40.893 TEST_HEADER include/spdk/log.h 01:13:40.893 TEST_HEADER include/spdk/lvol.h 01:13:40.893 TEST_HEADER include/spdk/md5.h 01:13:40.893 TEST_HEADER include/spdk/memory.h 01:13:40.893 TEST_HEADER include/spdk/mmio.h 01:13:40.893 TEST_HEADER include/spdk/nbd.h 01:13:40.893 TEST_HEADER 
include/spdk/net.h 01:13:40.893 TEST_HEADER include/spdk/notify.h 01:13:40.893 TEST_HEADER include/spdk/nvme.h 01:13:40.893 TEST_HEADER include/spdk/nvme_intel.h 01:13:40.893 TEST_HEADER include/spdk/nvme_ocssd.h 01:13:40.893 TEST_HEADER include/spdk/nvme_ocssd_spec.h 01:13:40.893 TEST_HEADER include/spdk/nvme_spec.h 01:13:40.893 TEST_HEADER include/spdk/nvme_zns.h 01:13:40.893 TEST_HEADER include/spdk/nvmf_cmd.h 01:13:40.893 TEST_HEADER include/spdk/nvmf_fc_spec.h 01:13:40.893 TEST_HEADER include/spdk/nvmf.h 01:13:40.893 TEST_HEADER include/spdk/nvmf_spec.h 01:13:40.893 TEST_HEADER include/spdk/nvmf_transport.h 01:13:40.893 TEST_HEADER include/spdk/opal.h 01:13:40.893 TEST_HEADER include/spdk/opal_spec.h 01:13:40.893 TEST_HEADER include/spdk/pci_ids.h 01:13:40.893 CC test/event/event_perf/event_perf.o 01:13:40.893 TEST_HEADER include/spdk/pipe.h 01:13:40.893 TEST_HEADER include/spdk/queue.h 01:13:40.893 TEST_HEADER include/spdk/reduce.h 01:13:40.893 TEST_HEADER include/spdk/rpc.h 01:13:40.893 TEST_HEADER include/spdk/scheduler.h 01:13:40.893 TEST_HEADER include/spdk/scsi.h 01:13:40.893 TEST_HEADER include/spdk/scsi_spec.h 01:13:41.152 TEST_HEADER include/spdk/sock.h 01:13:41.152 TEST_HEADER include/spdk/stdinc.h 01:13:41.152 TEST_HEADER include/spdk/string.h 01:13:41.152 TEST_HEADER include/spdk/thread.h 01:13:41.152 CC test/env/mem_callbacks/mem_callbacks.o 01:13:41.152 TEST_HEADER include/spdk/trace.h 01:13:41.152 TEST_HEADER include/spdk/trace_parser.h 01:13:41.152 TEST_HEADER include/spdk/tree.h 01:13:41.152 TEST_HEADER include/spdk/ublk.h 01:13:41.152 TEST_HEADER include/spdk/util.h 01:13:41.152 TEST_HEADER include/spdk/uuid.h 01:13:41.152 TEST_HEADER include/spdk/version.h 01:13:41.152 LINK reconnect 01:13:41.152 TEST_HEADER include/spdk/vfio_user_pci.h 01:13:41.152 TEST_HEADER include/spdk/vfio_user_spec.h 01:13:41.152 TEST_HEADER include/spdk/vhost.h 01:13:41.152 TEST_HEADER include/spdk/vmd.h 01:13:41.152 TEST_HEADER include/spdk/xor.h 01:13:41.152 TEST_HEADER include/spdk/zipf.h 01:13:41.152 CXX test/cpp_headers/accel.o 01:13:41.152 LINK vhost 01:13:41.152 LINK arbitration 01:13:41.152 LINK hotplug 01:13:41.152 LINK event_perf 01:13:41.152 CXX test/cpp_headers/accel_module.o 01:13:41.152 LINK nvme_manage 01:13:41.152 CXX test/cpp_headers/assert.o 01:13:41.152 CXX test/cpp_headers/barrier.o 01:13:41.409 CC test/event/reactor/reactor.o 01:13:41.409 CXX test/cpp_headers/base64.o 01:13:41.409 LINK blobcli 01:13:41.409 CC examples/nvme/cmb_copy/cmb_copy.o 01:13:41.409 CC test/event/reactor_perf/reactor_perf.o 01:13:41.409 LINK iscsi_fuzz 01:13:41.409 LINK reactor 01:13:41.409 CC test/env/vtophys/vtophys.o 01:13:41.409 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 01:13:41.409 CXX test/cpp_headers/bdev.o 01:13:41.409 CXX test/cpp_headers/bdev_module.o 01:13:41.668 CC test/event/app_repeat/app_repeat.o 01:13:41.668 LINK mem_callbacks 01:13:41.668 LINK reactor_perf 01:13:41.668 CXX test/cpp_headers/bdev_zone.o 01:13:41.668 LINK cmb_copy 01:13:41.668 LINK vtophys 01:13:41.668 LINK env_dpdk_post_init 01:13:41.668 CXX test/cpp_headers/bit_array.o 01:13:41.668 LINK app_repeat 01:13:41.926 CC examples/nvme/abort/abort.o 01:13:41.926 CC examples/nvme/pmr_persistence/pmr_persistence.o 01:13:41.926 CC test/event/scheduler/scheduler.o 01:13:41.926 CC test/nvme/aer/aer.o 01:13:41.926 CC test/env/memory/memory_ut.o 01:13:41.926 CC test/env/pci/pci_ut.o 01:13:41.926 CC test/nvme/sgl/sgl.o 01:13:41.926 CC test/nvme/reset/reset.o 01:13:41.926 CXX test/cpp_headers/bit_pool.o 01:13:41.926 CC 
test/nvme/e2edp/nvme_dp.o 01:13:41.926 LINK pmr_persistence 01:13:41.926 CXX test/cpp_headers/blob_bdev.o 01:13:42.183 LINK scheduler 01:13:42.183 LINK aer 01:13:42.183 LINK sgl 01:13:42.183 LINK reset 01:13:42.183 LINK abort 01:13:42.183 CXX test/cpp_headers/blobfs_bdev.o 01:13:42.183 LINK pci_ut 01:13:42.183 LINK nvme_dp 01:13:42.183 CC test/nvme/overhead/overhead.o 01:13:42.440 CC test/nvme/err_injection/err_injection.o 01:13:42.440 CC test/nvme/startup/startup.o 01:13:42.440 CC test/nvme/reserve/reserve.o 01:13:42.440 CXX test/cpp_headers/blobfs.o 01:13:42.440 CC test/nvme/simple_copy/simple_copy.o 01:13:42.440 CC examples/bdev/hello_world/hello_bdev.o 01:13:42.440 LINK err_injection 01:13:42.440 LINK startup 01:13:42.698 LINK overhead 01:13:42.698 CXX test/cpp_headers/blob.o 01:13:42.698 CC examples/bdev/bdevperf/bdevperf.o 01:13:42.698 CC test/nvme/connect_stress/connect_stress.o 01:13:42.698 LINK reserve 01:13:42.698 LINK simple_copy 01:13:42.698 LINK hello_bdev 01:13:42.698 CXX test/cpp_headers/conf.o 01:13:42.698 LINK connect_stress 01:13:42.698 CC test/rpc_client/rpc_client_test.o 01:13:42.698 CXX test/cpp_headers/config.o 01:13:42.698 CC test/nvme/boot_partition/boot_partition.o 01:13:42.957 CC test/nvme/compliance/nvme_compliance.o 01:13:42.957 CC test/nvme/fused_ordering/fused_ordering.o 01:13:42.957 CXX test/cpp_headers/cpuset.o 01:13:42.957 LINK rpc_client_test 01:13:42.957 CC test/nvme/doorbell_aers/doorbell_aers.o 01:13:42.957 LINK boot_partition 01:13:42.957 CC test/nvme/fdp/fdp.o 01:13:42.957 CXX test/cpp_headers/crc16.o 01:13:42.957 LINK memory_ut 01:13:42.957 LINK fused_ordering 01:13:43.215 LINK doorbell_aers 01:13:43.215 LINK nvme_compliance 01:13:43.215 CC test/accel/dif/dif.o 01:13:43.215 CXX test/cpp_headers/crc32.o 01:13:43.215 CXX test/cpp_headers/crc64.o 01:13:43.215 LINK bdevperf 01:13:43.215 CC test/nvme/cuse/cuse.o 01:13:43.215 CC test/blobfs/mkfs/mkfs.o 01:13:43.215 CXX test/cpp_headers/dif.o 01:13:43.472 CXX test/cpp_headers/dma.o 01:13:43.472 LINK fdp 01:13:43.472 CXX test/cpp_headers/endian.o 01:13:43.472 CXX test/cpp_headers/env_dpdk.o 01:13:43.472 LINK mkfs 01:13:43.472 CC test/lvol/esnap/esnap.o 01:13:43.472 CXX test/cpp_headers/env.o 01:13:43.472 CXX test/cpp_headers/event.o 01:13:43.472 CXX test/cpp_headers/fd_group.o 01:13:43.472 CXX test/cpp_headers/fd.o 01:13:43.472 CXX test/cpp_headers/file.o 01:13:43.730 CC examples/nvmf/nvmf/nvmf.o 01:13:43.730 CXX test/cpp_headers/fsdev.o 01:13:43.730 CXX test/cpp_headers/fsdev_module.o 01:13:43.730 CXX test/cpp_headers/ftl.o 01:13:43.730 CXX test/cpp_headers/fuse_dispatcher.o 01:13:43.730 CXX test/cpp_headers/gpt_spec.o 01:13:43.730 LINK dif 01:13:43.730 CXX test/cpp_headers/hexlify.o 01:13:43.730 CXX test/cpp_headers/histogram_data.o 01:13:43.730 CXX test/cpp_headers/idxd.o 01:13:43.730 CXX test/cpp_headers/idxd_spec.o 01:13:43.988 CXX test/cpp_headers/init.o 01:13:43.988 CXX test/cpp_headers/ioat.o 01:13:43.988 LINK nvmf 01:13:43.988 CXX test/cpp_headers/ioat_spec.o 01:13:43.988 CXX test/cpp_headers/iscsi_spec.o 01:13:43.988 CXX test/cpp_headers/json.o 01:13:43.989 CXX test/cpp_headers/jsonrpc.o 01:13:43.989 CXX test/cpp_headers/keyring.o 01:13:43.989 CXX test/cpp_headers/keyring_module.o 01:13:44.247 CXX test/cpp_headers/likely.o 01:13:44.247 CXX test/cpp_headers/log.o 01:13:44.247 CXX test/cpp_headers/lvol.o 01:13:44.247 CXX test/cpp_headers/md5.o 01:13:44.247 CXX test/cpp_headers/memory.o 01:13:44.247 CC test/bdev/bdevio/bdevio.o 01:13:44.247 CXX test/cpp_headers/mmio.o 01:13:44.247 CXX 
test/cpp_headers/nbd.o 01:13:44.247 CXX test/cpp_headers/net.o 01:13:44.247 CXX test/cpp_headers/notify.o 01:13:44.247 CXX test/cpp_headers/nvme.o 01:13:44.247 CXX test/cpp_headers/nvme_intel.o 01:13:44.247 CXX test/cpp_headers/nvme_ocssd.o 01:13:44.247 CXX test/cpp_headers/nvme_ocssd_spec.o 01:13:44.505 CXX test/cpp_headers/nvme_spec.o 01:13:44.505 CXX test/cpp_headers/nvme_zns.o 01:13:44.505 CXX test/cpp_headers/nvmf_cmd.o 01:13:44.505 CXX test/cpp_headers/nvmf_fc_spec.o 01:13:44.505 CXX test/cpp_headers/nvmf.o 01:13:44.505 CXX test/cpp_headers/nvmf_spec.o 01:13:44.505 LINK cuse 01:13:44.505 LINK bdevio 01:13:44.505 CXX test/cpp_headers/nvmf_transport.o 01:13:44.505 CXX test/cpp_headers/opal.o 01:13:44.505 CXX test/cpp_headers/opal_spec.o 01:13:44.764 CXX test/cpp_headers/pci_ids.o 01:13:44.764 CXX test/cpp_headers/pipe.o 01:13:44.764 CXX test/cpp_headers/queue.o 01:13:44.764 CXX test/cpp_headers/reduce.o 01:13:44.764 CXX test/cpp_headers/rpc.o 01:13:44.764 CXX test/cpp_headers/scheduler.o 01:13:44.764 CXX test/cpp_headers/scsi.o 01:13:44.764 CXX test/cpp_headers/scsi_spec.o 01:13:44.764 CXX test/cpp_headers/sock.o 01:13:44.764 CXX test/cpp_headers/stdinc.o 01:13:44.764 CXX test/cpp_headers/string.o 01:13:44.764 CXX test/cpp_headers/thread.o 01:13:44.764 CXX test/cpp_headers/trace.o 01:13:44.764 CXX test/cpp_headers/trace_parser.o 01:13:44.764 CXX test/cpp_headers/tree.o 01:13:44.764 CXX test/cpp_headers/ublk.o 01:13:44.764 CXX test/cpp_headers/util.o 01:13:44.764 CXX test/cpp_headers/uuid.o 01:13:45.023 CXX test/cpp_headers/version.o 01:13:45.023 CXX test/cpp_headers/vfio_user_pci.o 01:13:45.023 CXX test/cpp_headers/vfio_user_spec.o 01:13:45.023 CXX test/cpp_headers/vhost.o 01:13:45.023 CXX test/cpp_headers/vmd.o 01:13:45.023 CXX test/cpp_headers/xor.o 01:13:45.023 CXX test/cpp_headers/zipf.o 01:13:48.352 LINK esnap 01:13:48.352 01:13:48.352 real 1m22.923s 01:13:48.352 user 6m51.688s 01:13:48.352 sys 1m38.033s 01:13:48.353 05:08:30 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 01:13:48.353 05:08:30 make -- common/autotest_common.sh@10 -- $ set +x 01:13:48.353 ************************************ 01:13:48.353 END TEST make 01:13:48.353 ************************************ 01:13:48.353 05:08:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 01:13:48.353 05:08:30 -- pm/common@29 -- $ signal_monitor_resources TERM 01:13:48.353 05:08:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 01:13:48.353 05:08:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:13:48.353 05:08:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 01:13:48.353 05:08:30 -- pm/common@44 -- $ pid=5465 01:13:48.353 05:08:30 -- pm/common@50 -- $ kill -TERM 5465 01:13:48.353 05:08:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:13:48.353 05:08:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 01:13:48.353 05:08:30 -- pm/common@44 -- $ pid=5467 01:13:48.353 05:08:30 -- pm/common@50 -- $ kill -TERM 5467 01:13:48.353 05:08:30 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 01:13:48.353 05:08:30 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 01:13:48.353 05:08:30 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:13:48.353 05:08:30 -- common/autotest_common.sh@1693 -- # lcov --version 01:13:48.353 05:08:30 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 
01:13:48.353 05:08:30 -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:13:48.353 05:08:30 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:13:48.353 05:08:30 -- scripts/common.sh@333 -- # local ver1 ver1_l 01:13:48.353 05:08:30 -- scripts/common.sh@334 -- # local ver2 ver2_l 01:13:48.353 05:08:30 -- scripts/common.sh@336 -- # IFS=.-: 01:13:48.353 05:08:30 -- scripts/common.sh@336 -- # read -ra ver1 01:13:48.353 05:08:30 -- scripts/common.sh@337 -- # IFS=.-: 01:13:48.353 05:08:30 -- scripts/common.sh@337 -- # read -ra ver2 01:13:48.353 05:08:30 -- scripts/common.sh@338 -- # local 'op=<' 01:13:48.353 05:08:30 -- scripts/common.sh@340 -- # ver1_l=2 01:13:48.353 05:08:30 -- scripts/common.sh@341 -- # ver2_l=1 01:13:48.353 05:08:30 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:13:48.353 05:08:30 -- scripts/common.sh@344 -- # case "$op" in 01:13:48.353 05:08:30 -- scripts/common.sh@345 -- # : 1 01:13:48.353 05:08:30 -- scripts/common.sh@364 -- # (( v = 0 )) 01:13:48.353 05:08:30 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:13:48.353 05:08:30 -- scripts/common.sh@365 -- # decimal 1 01:13:48.353 05:08:30 -- scripts/common.sh@353 -- # local d=1 01:13:48.353 05:08:30 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:13:48.353 05:08:30 -- scripts/common.sh@355 -- # echo 1 01:13:48.353 05:08:30 -- scripts/common.sh@365 -- # ver1[v]=1 01:13:48.353 05:08:30 -- scripts/common.sh@366 -- # decimal 2 01:13:48.353 05:08:30 -- scripts/common.sh@353 -- # local d=2 01:13:48.353 05:08:30 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:13:48.353 05:08:30 -- scripts/common.sh@355 -- # echo 2 01:13:48.353 05:08:30 -- scripts/common.sh@366 -- # ver2[v]=2 01:13:48.353 05:08:30 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:13:48.353 05:08:30 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:13:48.353 05:08:30 -- scripts/common.sh@368 -- # return 0 01:13:48.353 05:08:30 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:13:48.353 05:08:30 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:13:48.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:48.353 --rc genhtml_branch_coverage=1 01:13:48.353 --rc genhtml_function_coverage=1 01:13:48.353 --rc genhtml_legend=1 01:13:48.353 --rc geninfo_all_blocks=1 01:13:48.353 --rc geninfo_unexecuted_blocks=1 01:13:48.353 01:13:48.353 ' 01:13:48.353 05:08:30 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:13:48.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:48.353 --rc genhtml_branch_coverage=1 01:13:48.353 --rc genhtml_function_coverage=1 01:13:48.353 --rc genhtml_legend=1 01:13:48.353 --rc geninfo_all_blocks=1 01:13:48.353 --rc geninfo_unexecuted_blocks=1 01:13:48.353 01:13:48.353 ' 01:13:48.353 05:08:30 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:13:48.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:48.353 --rc genhtml_branch_coverage=1 01:13:48.353 --rc genhtml_function_coverage=1 01:13:48.353 --rc genhtml_legend=1 01:13:48.353 --rc geninfo_all_blocks=1 01:13:48.353 --rc geninfo_unexecuted_blocks=1 01:13:48.353 01:13:48.353 ' 01:13:48.353 05:08:30 -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:13:48.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:48.353 --rc genhtml_branch_coverage=1 01:13:48.353 --rc genhtml_function_coverage=1 01:13:48.353 --rc genhtml_legend=1 01:13:48.353 --rc geninfo_all_blocks=1 01:13:48.353 --rc 
geninfo_unexecuted_blocks=1 01:13:48.353 01:13:48.353 ' 01:13:48.353 05:08:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:13:48.353 05:08:30 -- nvmf/common.sh@7 -- # uname -s 01:13:48.353 05:08:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:13:48.353 05:08:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:13:48.353 05:08:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:13:48.353 05:08:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:13:48.353 05:08:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:13:48.353 05:08:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:13:48.353 05:08:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:13:48.353 05:08:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:13:48.353 05:08:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:13:48.353 05:08:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:13:48.353 05:08:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:13:48.353 05:08:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:13:48.353 05:08:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:13:48.353 05:08:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:13:48.353 05:08:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:13:48.353 05:08:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:13:48.353 05:08:30 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:13:48.353 05:08:30 -- scripts/common.sh@15 -- # shopt -s extglob 01:13:48.353 05:08:30 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:13:48.353 05:08:30 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:13:48.353 05:08:30 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:13:48.353 05:08:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:48.353 05:08:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:48.353 05:08:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:48.353 05:08:30 -- paths/export.sh@5 -- # export PATH 01:13:48.353 05:08:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:48.353 05:08:30 -- nvmf/common.sh@51 -- # : 0 01:13:48.353 05:08:30 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:13:48.353 05:08:30 -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:13:48.353 05:08:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:13:48.353 05:08:30 -- nvmf/common.sh@29 -- 
# NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:13:48.353 05:08:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:13:48.353 05:08:30 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:13:48.353 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:13:48.353 05:08:30 -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:13:48.353 05:08:30 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:13:48.353 05:08:30 -- nvmf/common.sh@55 -- # have_pci_nics=0 01:13:48.353 05:08:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 01:13:48.353 05:08:30 -- spdk/autotest.sh@32 -- # uname -s 01:13:48.353 05:08:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 01:13:48.353 05:08:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 01:13:48.353 05:08:30 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 01:13:48.353 05:08:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 01:13:48.353 05:08:30 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 01:13:48.353 05:08:30 -- spdk/autotest.sh@44 -- # modprobe nbd 01:13:48.613 05:08:30 -- spdk/autotest.sh@46 -- # type -P udevadm 01:13:48.613 05:08:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 01:13:48.613 05:08:30 -- spdk/autotest.sh@48 -- # udevadm_pid=54553 01:13:48.613 05:08:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 01:13:48.613 05:08:30 -- pm/common@17 -- # local monitor 01:13:48.613 05:08:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 01:13:48.613 05:08:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 01:13:48.613 05:08:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 01:13:48.613 05:08:30 -- pm/common@25 -- # sleep 1 01:13:48.613 05:08:30 -- pm/common@21 -- # date +%s 01:13:48.613 05:08:30 -- pm/common@21 -- # date +%s 01:13:48.613 05:08:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733720910 01:13:48.613 05:08:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733720910 01:13:48.613 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733720910_collect-vmstat.pm.log 01:13:48.613 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733720910_collect-cpu-load.pm.log 01:13:49.552 05:08:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 01:13:49.552 05:08:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 01:13:49.552 05:08:31 -- common/autotest_common.sh@726 -- # xtrace_disable 01:13:49.552 05:08:31 -- common/autotest_common.sh@10 -- # set +x 01:13:49.552 05:08:31 -- spdk/autotest.sh@59 -- # create_test_list 01:13:49.552 05:08:31 -- common/autotest_common.sh@752 -- # xtrace_disable 01:13:49.552 05:08:31 -- common/autotest_common.sh@10 -- # set +x 01:13:49.552 05:08:31 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 01:13:49.552 05:08:31 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 01:13:49.552 05:08:31 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 01:13:49.552 05:08:31 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 01:13:49.552 05:08:31 -- spdk/autotest.sh@63 -- # cd 
/home/vagrant/spdk_repo/spdk 01:13:49.552 05:08:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 01:13:49.552 05:08:31 -- common/autotest_common.sh@1457 -- # uname 01:13:49.552 05:08:31 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 01:13:49.552 05:08:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 01:13:49.552 05:08:31 -- common/autotest_common.sh@1477 -- # uname 01:13:49.552 05:08:31 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 01:13:49.552 05:08:31 -- spdk/autotest.sh@68 -- # [[ y == y ]] 01:13:49.552 05:08:31 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 01:13:49.552 lcov: LCOV version 1.15 01:13:49.552 05:08:31 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 01:14:04.534 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 01:14:04.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 01:14:19.423 05:09:01 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 01:14:19.423 05:09:01 -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:19.423 05:09:01 -- common/autotest_common.sh@10 -- # set +x 01:14:19.423 05:09:01 -- spdk/autotest.sh@78 -- # rm -f 01:14:19.423 05:09:01 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:14:19.423 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:19.423 0000:00:11.0 (1b36 0010): Already using the nvme driver 01:14:19.684 0000:00:10.0 (1b36 0010): Already using the nvme driver 01:14:19.684 05:09:01 -- spdk/autotest.sh@83 -- # get_zoned_devs 01:14:19.684 05:09:01 -- common/autotest_common.sh@1657 -- # zoned_devs=() 01:14:19.684 05:09:01 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 01:14:19.684 05:09:01 -- common/autotest_common.sh@1658 -- # local nvme bdf 01:14:19.684 05:09:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:14:19.684 05:09:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 01:14:19.684 05:09:01 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:14:19.684 05:09:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:14:19.684 05:09:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:14:19.684 05:09:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:14:19.684 05:09:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 01:14:19.684 05:09:01 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:14:19.684 05:09:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:14:19.684 05:09:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:14:19.684 05:09:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:14:19.684 05:09:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 01:14:19.684 05:09:01 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 01:14:19.684 05:09:01 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 01:14:19.684 05:09:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:14:19.684 05:09:01 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 01:14:19.684 05:09:01 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 01:14:19.684 05:09:01 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 01:14:19.684 05:09:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 01:14:19.684 05:09:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:14:19.684 05:09:01 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 01:14:19.684 05:09:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:14:19.684 05:09:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:14:19.684 05:09:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 01:14:19.684 05:09:01 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 01:14:19.684 05:09:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 01:14:19.684 No valid GPT data, bailing 01:14:19.684 05:09:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:14:19.684 05:09:01 -- scripts/common.sh@394 -- # pt= 01:14:19.684 05:09:01 -- scripts/common.sh@395 -- # return 1 01:14:19.684 05:09:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 01:14:19.684 1+0 records in 01:14:19.684 1+0 records out 01:14:19.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00717111 s, 146 MB/s 01:14:19.684 05:09:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:14:19.684 05:09:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:14:19.684 05:09:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 01:14:19.684 05:09:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 01:14:19.684 05:09:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 01:14:19.684 No valid GPT data, bailing 01:14:19.684 05:09:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:14:19.684 05:09:02 -- scripts/common.sh@394 -- # pt= 01:14:19.684 05:09:02 -- scripts/common.sh@395 -- # return 1 01:14:19.684 05:09:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 01:14:19.684 1+0 records in 01:14:19.684 1+0 records out 01:14:19.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00720839 s, 145 MB/s 01:14:19.684 05:09:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:14:19.684 05:09:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:14:19.684 05:09:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 01:14:19.684 05:09:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 01:14:19.684 05:09:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 01:14:19.944 No valid GPT data, bailing 01:14:19.944 05:09:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 01:14:19.944 05:09:02 -- scripts/common.sh@394 -- # pt= 01:14:19.944 05:09:02 -- scripts/common.sh@395 -- # return 1 01:14:19.944 05:09:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 01:14:19.944 1+0 records in 01:14:19.944 1+0 records out 01:14:19.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0074354 s, 141 MB/s 01:14:19.944 05:09:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 01:14:19.944 05:09:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 01:14:19.944 05:09:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 01:14:19.944 
05:09:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 01:14:19.944 05:09:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 01:14:19.944 No valid GPT data, bailing 01:14:19.944 05:09:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 01:14:19.944 05:09:02 -- scripts/common.sh@394 -- # pt= 01:14:19.944 05:09:02 -- scripts/common.sh@395 -- # return 1 01:14:19.944 05:09:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 01:14:19.944 1+0 records in 01:14:19.944 1+0 records out 01:14:19.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00643725 s, 163 MB/s 01:14:19.944 05:09:02 -- spdk/autotest.sh@105 -- # sync 01:14:19.944 05:09:02 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 01:14:19.944 05:09:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 01:14:19.944 05:09:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 01:14:22.478 05:09:04 -- spdk/autotest.sh@111 -- # uname -s 01:14:22.478 05:09:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 01:14:22.478 05:09:04 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 01:14:22.478 05:09:04 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 01:14:23.045 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:23.045 Hugepages 01:14:23.045 node hugesize free / total 01:14:23.045 node0 1048576kB 0 / 0 01:14:23.045 node0 2048kB 0 / 0 01:14:23.045 01:14:23.045 Type BDF Vendor Device NUMA Driver Device Block devices 01:14:23.303 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 01:14:23.303 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 01:14:23.303 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 01:14:23.303 05:09:05 -- spdk/autotest.sh@117 -- # uname -s 01:14:23.561 05:09:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 01:14:23.561 05:09:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 01:14:23.561 05:09:05 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:14:24.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:24.388 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:14:24.388 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:14:24.388 05:09:06 -- common/autotest_common.sh@1517 -- # sleep 1 01:14:25.768 05:09:07 -- common/autotest_common.sh@1518 -- # bdfs=() 01:14:25.768 05:09:07 -- common/autotest_common.sh@1518 -- # local bdfs 01:14:25.768 05:09:07 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 01:14:25.768 05:09:07 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 01:14:25.768 05:09:07 -- common/autotest_common.sh@1498 -- # bdfs=() 01:14:25.768 05:09:07 -- common/autotest_common.sh@1498 -- # local bdfs 01:14:25.768 05:09:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:14:25.768 05:09:07 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:14:25.768 05:09:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:14:25.768 05:09:07 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 01:14:25.768 05:09:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:14:25.768 05:09:07 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
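For reference, the get_nvme_bdfs helper traced just above reduces to a single jq query over the JSON emitted by gen_nvme.sh; a minimal standalone sketch of that pattern (paths as used in this run, output on this VM being the two controllers shown):
    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh prints a bdev_nvme config; each controller's PCI address sits in .params.traddr
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"    # e.g. 0000:00:10.0 and 0000:00:11.0 here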
01:14:26.027 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:26.027 Waiting for block devices as requested 01:14:26.027 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:14:26.288 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:14:26.288 05:09:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 01:14:26.288 05:09:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 01:14:26.288 05:09:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 01:14:26.288 05:09:08 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 01:14:26.288 05:09:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 01:14:26.288 05:09:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 01:14:26.288 05:09:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 01:14:26.288 05:09:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 01:14:26.288 05:09:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 01:14:26.288 05:09:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 01:14:26.288 05:09:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 01:14:26.288 05:09:08 -- common/autotest_common.sh@1531 -- # grep oacs 01:14:26.288 05:09:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 01:14:26.288 05:09:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 01:14:26.288 05:09:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 01:14:26.288 05:09:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 01:14:26.288 05:09:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 01:14:26.288 05:09:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 01:14:26.288 05:09:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 01:14:26.288 05:09:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 01:14:26.288 05:09:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 01:14:26.288 05:09:08 -- common/autotest_common.sh@1543 -- # continue 01:14:26.288 05:09:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 01:14:26.288 05:09:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 01:14:26.288 05:09:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 01:14:26.288 05:09:08 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 01:14:26.288 05:09:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 01:14:26.288 05:09:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 01:14:26.288 05:09:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 01:14:26.288 05:09:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 01:14:26.288 05:09:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 01:14:26.288 05:09:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 01:14:26.288 05:09:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 01:14:26.288 05:09:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 01:14:26.288 05:09:08 -- common/autotest_common.sh@1531 -- # grep oacs 01:14:26.288 05:09:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 01:14:26.288 05:09:08 -- 
common/autotest_common.sh@1532 -- # oacs_ns_manage=8 01:14:26.288 05:09:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 01:14:26.288 05:09:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 01:14:26.288 05:09:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 01:14:26.288 05:09:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 01:14:26.288 05:09:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 01:14:26.288 05:09:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 01:14:26.288 05:09:08 -- common/autotest_common.sh@1543 -- # continue 01:14:26.288 05:09:08 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 01:14:26.288 05:09:08 -- common/autotest_common.sh@732 -- # xtrace_disable 01:14:26.288 05:09:08 -- common/autotest_common.sh@10 -- # set +x 01:14:26.288 05:09:08 -- spdk/autotest.sh@125 -- # timing_enter afterboot 01:14:26.288 05:09:08 -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:26.288 05:09:08 -- common/autotest_common.sh@10 -- # set +x 01:14:26.288 05:09:08 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:14:27.233 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:27.233 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:14:27.233 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:14:27.233 05:09:09 -- spdk/autotest.sh@127 -- # timing_exit afterboot 01:14:27.233 05:09:09 -- common/autotest_common.sh@732 -- # xtrace_disable 01:14:27.233 05:09:09 -- common/autotest_common.sh@10 -- # set +x 01:14:27.233 05:09:09 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 01:14:27.233 05:09:09 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 01:14:27.233 05:09:09 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 01:14:27.233 05:09:09 -- common/autotest_common.sh@1563 -- # bdfs=() 01:14:27.233 05:09:09 -- common/autotest_common.sh@1563 -- # _bdfs=() 01:14:27.233 05:09:09 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 01:14:27.233 05:09:09 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 01:14:27.233 05:09:09 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 01:14:27.233 05:09:09 -- common/autotest_common.sh@1498 -- # bdfs=() 01:14:27.233 05:09:09 -- common/autotest_common.sh@1498 -- # local bdfs 01:14:27.233 05:09:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:14:27.233 05:09:09 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:14:27.233 05:09:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:14:27.493 05:09:09 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 01:14:27.493 05:09:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:14:27.493 05:09:09 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 01:14:27.493 05:09:09 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 01:14:27.493 05:09:09 -- common/autotest_common.sh@1566 -- # device=0x0010 01:14:27.493 05:09:09 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 01:14:27.493 05:09:09 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 01:14:27.493 05:09:09 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 01:14:27.493 05:09:09 -- common/autotest_common.sh@1566 -- # device=0x0010 01:14:27.493 05:09:09 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
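The per-controller checks traced above (opal_revert_cleanup via get_nvme_bdfs_by_id 0x0a54) amount to filtering the discovered PCI addresses by their sysfs device ID; a rough sketch of that filter, assuming the same sysfs layout as this run:
    bdfs=()
    for bdf in 0000:00:10.0 0000:00:11.0; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        # keep only controllers whose PCI device ID matches the one being looked for (0x0a54)
        [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
    done
    # on this QEMU setup both controllers report 0x0010, so the list stays empty
    # and the cleanup step has nothing to revert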
01:14:27.493 05:09:09 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 01:14:27.493 05:09:09 -- common/autotest_common.sh@1572 -- # return 0 01:14:27.493 05:09:09 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 01:14:27.493 05:09:09 -- common/autotest_common.sh@1580 -- # return 0 01:14:27.493 05:09:09 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 01:14:27.493 05:09:09 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 01:14:27.493 05:09:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 01:14:27.493 05:09:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 01:14:27.493 05:09:09 -- spdk/autotest.sh@149 -- # timing_enter lib 01:14:27.493 05:09:09 -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:27.493 05:09:09 -- common/autotest_common.sh@10 -- # set +x 01:14:27.493 05:09:09 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 01:14:27.493 05:09:09 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 01:14:27.493 05:09:09 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 01:14:27.493 05:09:09 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 01:14:27.493 05:09:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:27.493 05:09:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:27.493 05:09:09 -- common/autotest_common.sh@10 -- # set +x 01:14:27.493 ************************************ 01:14:27.493 START TEST env 01:14:27.493 ************************************ 01:14:27.493 05:09:09 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 01:14:27.493 * Looking for test storage... 01:14:27.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 01:14:27.493 05:09:09 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:14:27.493 05:09:09 env -- common/autotest_common.sh@1693 -- # lcov --version 01:14:27.493 05:09:09 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:14:27.493 05:09:09 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:14:27.493 05:09:09 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:14:27.493 05:09:09 env -- scripts/common.sh@333 -- # local ver1 ver1_l 01:14:27.493 05:09:09 env -- scripts/common.sh@334 -- # local ver2 ver2_l 01:14:27.493 05:09:09 env -- scripts/common.sh@336 -- # IFS=.-: 01:14:27.493 05:09:09 env -- scripts/common.sh@336 -- # read -ra ver1 01:14:27.493 05:09:09 env -- scripts/common.sh@337 -- # IFS=.-: 01:14:27.493 05:09:09 env -- scripts/common.sh@337 -- # read -ra ver2 01:14:27.493 05:09:09 env -- scripts/common.sh@338 -- # local 'op=<' 01:14:27.493 05:09:09 env -- scripts/common.sh@340 -- # ver1_l=2 01:14:27.493 05:09:09 env -- scripts/common.sh@341 -- # ver2_l=1 01:14:27.493 05:09:09 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:14:27.493 05:09:09 env -- scripts/common.sh@344 -- # case "$op" in 01:14:27.493 05:09:09 env -- scripts/common.sh@345 -- # : 1 01:14:27.493 05:09:09 env -- scripts/common.sh@364 -- # (( v = 0 )) 01:14:27.493 05:09:09 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:14:27.493 05:09:09 env -- scripts/common.sh@365 -- # decimal 1 01:14:27.493 05:09:09 env -- scripts/common.sh@353 -- # local d=1 01:14:27.493 05:09:09 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:14:27.493 05:09:09 env -- scripts/common.sh@355 -- # echo 1 01:14:27.493 05:09:09 env -- scripts/common.sh@365 -- # ver1[v]=1 01:14:27.493 05:09:09 env -- scripts/common.sh@366 -- # decimal 2 01:14:27.493 05:09:09 env -- scripts/common.sh@353 -- # local d=2 01:14:27.493 05:09:09 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:14:27.493 05:09:09 env -- scripts/common.sh@355 -- # echo 2 01:14:27.493 05:09:09 env -- scripts/common.sh@366 -- # ver2[v]=2 01:14:27.493 05:09:09 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:14:27.493 05:09:09 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:14:27.493 05:09:09 env -- scripts/common.sh@368 -- # return 0 01:14:27.493 05:09:09 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:14:27.493 05:09:09 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:14:27.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:27.493 --rc genhtml_branch_coverage=1 01:14:27.493 --rc genhtml_function_coverage=1 01:14:27.493 --rc genhtml_legend=1 01:14:27.493 --rc geninfo_all_blocks=1 01:14:27.493 --rc geninfo_unexecuted_blocks=1 01:14:27.493 01:14:27.493 ' 01:14:27.494 05:09:09 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:14:27.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:27.494 --rc genhtml_branch_coverage=1 01:14:27.494 --rc genhtml_function_coverage=1 01:14:27.494 --rc genhtml_legend=1 01:14:27.494 --rc geninfo_all_blocks=1 01:14:27.494 --rc geninfo_unexecuted_blocks=1 01:14:27.494 01:14:27.494 ' 01:14:27.494 05:09:09 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:14:27.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:27.494 --rc genhtml_branch_coverage=1 01:14:27.494 --rc genhtml_function_coverage=1 01:14:27.494 --rc genhtml_legend=1 01:14:27.494 --rc geninfo_all_blocks=1 01:14:27.494 --rc geninfo_unexecuted_blocks=1 01:14:27.494 01:14:27.494 ' 01:14:27.494 05:09:09 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:14:27.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:27.494 --rc genhtml_branch_coverage=1 01:14:27.494 --rc genhtml_function_coverage=1 01:14:27.494 --rc genhtml_legend=1 01:14:27.494 --rc geninfo_all_blocks=1 01:14:27.494 --rc geninfo_unexecuted_blocks=1 01:14:27.494 01:14:27.494 ' 01:14:27.494 05:09:09 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 01:14:27.494 05:09:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:27.494 05:09:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:27.494 05:09:09 env -- common/autotest_common.sh@10 -- # set +x 01:14:27.494 ************************************ 01:14:27.494 START TEST env_memory 01:14:27.494 ************************************ 01:14:27.494 05:09:09 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 01:14:27.494 01:14:27.494 01:14:27.494 CUnit - A unit testing framework for C - Version 2.1-3 01:14:27.494 http://cunit.sourceforge.net/ 01:14:27.494 01:14:27.494 01:14:27.494 Suite: mem_map_2mb 01:14:27.753 Test: alloc and free memory map ...[2024-12-09 05:09:09.979049] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 311:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 01:14:27.753 passed 01:14:27.753 Test: mem map translation ...[2024-12-09 05:09:10.004093] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 01:14:27.753 [2024-12-09 05:09:10.004182] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 01:14:27.753 [2024-12-09 05:09:10.004243] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 623:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 01:14:27.753 [2024-12-09 05:09:10.004254] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 639:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 01:14:27.753 passed 01:14:27.753 Test: mem map registration ...[2024-12-09 05:09:10.052994] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 01:14:27.754 [2024-12-09 05:09:10.053081] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 01:14:27.754 passed 01:14:27.754 Test: mem map adjacent registrations ...passed 01:14:27.754 Suite: mem_map_4kb 01:14:27.754 Test: alloc and free memory map ...[2024-12-09 05:09:10.175293] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 311:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 01:14:27.754 passed 01:14:27.754 Test: mem map translation ...[2024-12-09 05:09:10.202472] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=4096 len=1234 01:14:27.754 [2024-12-09 05:09:10.202558] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=4096 01:14:28.013 [2024-12-09 05:09:10.222977] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 623:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 01:14:28.013 [2024-12-09 05:09:10.223058] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 639:spdk_mem_map_set_translation: *ERROR*: could not get 0xfffffffff000 map 01:14:28.013 passed 01:14:28.013 Test: mem map registration ...[2024-12-09 05:09:10.309595] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=1000 len=1234 01:14:28.013 [2024-12-09 05:09:10.309669] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=4096 01:14:28.013 passed 01:14:28.013 Test: mem map adjacent registrations ...passed 01:14:28.013 01:14:28.013 Run Summary: Type Total Ran Passed Failed Inactive 01:14:28.013 suites 2 2 n/a 0 0 01:14:28.013 tests 8 8 8 0 0 01:14:28.013 asserts 304 304 304 0 n/a 01:14:28.013 01:14:28.013 Elapsed time = 0.467 seconds 01:14:28.013 01:14:28.013 real 0m0.489s 01:14:28.013 user 0m0.459s 01:14:28.013 sys 0m0.024s 01:14:28.013 05:09:10 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:28.013 05:09:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 01:14:28.013 ************************************ 01:14:28.013 END TEST 
env_memory 01:14:28.013 ************************************ 01:14:28.013 05:09:10 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 01:14:28.013 05:09:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:28.013 05:09:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:28.013 05:09:10 env -- common/autotest_common.sh@10 -- # set +x 01:14:28.288 ************************************ 01:14:28.288 START TEST env_vtophys 01:14:28.288 ************************************ 01:14:28.288 05:09:10 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 01:14:28.288 EAL: lib.eal log level changed from notice to debug 01:14:28.288 EAL: Detected lcore 0 as core 0 on socket 0 01:14:28.288 EAL: Detected lcore 1 as core 0 on socket 0 01:14:28.288 EAL: Detected lcore 2 as core 0 on socket 0 01:14:28.288 EAL: Detected lcore 3 as core 0 on socket 0 01:14:28.288 EAL: Detected lcore 4 as core 0 on socket 0 01:14:28.288 EAL: Detected lcore 5 as core 0 on socket 0 01:14:28.288 EAL: Detected lcore 6 as core 0 on socket 0 01:14:28.288 EAL: Detected lcore 7 as core 0 on socket 0 01:14:28.288 EAL: Detected lcore 8 as core 0 on socket 0 01:14:28.288 EAL: Detected lcore 9 as core 0 on socket 0 01:14:28.288 EAL: Maximum logical cores by configuration: 128 01:14:28.288 EAL: Detected CPU lcores: 10 01:14:28.288 EAL: Detected NUMA nodes: 1 01:14:28.288 EAL: Checking presence of .so 'librte_eal.so.24.1' 01:14:28.288 EAL: Detected shared linkage of DPDK 01:14:28.288 EAL: No shared files mode enabled, IPC will be disabled 01:14:28.288 EAL: Selected IOVA mode 'PA' 01:14:28.288 EAL: Probing VFIO support... 01:14:28.288 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 01:14:28.288 EAL: VFIO modules not loaded, skipping VFIO support... 01:14:28.288 EAL: Ask a virtual area of 0x2e000 bytes 01:14:28.288 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 01:14:28.288 EAL: Setting up physically contiguous memory... 
01:14:28.288 EAL: Setting maximum number of open files to 524288 01:14:28.288 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 01:14:28.288 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 01:14:28.288 EAL: Ask a virtual area of 0x61000 bytes 01:14:28.288 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 01:14:28.288 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:14:28.288 EAL: Ask a virtual area of 0x400000000 bytes 01:14:28.288 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 01:14:28.288 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 01:14:28.288 EAL: Ask a virtual area of 0x61000 bytes 01:14:28.288 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 01:14:28.288 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:14:28.288 EAL: Ask a virtual area of 0x400000000 bytes 01:14:28.288 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 01:14:28.288 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 01:14:28.288 EAL: Ask a virtual area of 0x61000 bytes 01:14:28.288 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 01:14:28.288 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:14:28.288 EAL: Ask a virtual area of 0x400000000 bytes 01:14:28.288 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 01:14:28.288 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 01:14:28.288 EAL: Ask a virtual area of 0x61000 bytes 01:14:28.288 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 01:14:28.288 EAL: Memseg list allocated at socket 0, page size 0x800kB 01:14:28.288 EAL: Ask a virtual area of 0x400000000 bytes 01:14:28.288 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 01:14:28.288 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 01:14:28.288 EAL: Hugepages will be freed exactly as allocated. 01:14:28.288 EAL: No shared files mode enabled, IPC is disabled 01:14:28.288 EAL: No shared files mode enabled, IPC is disabled 01:14:28.288 EAL: TSC frequency is ~2290000 KHz 01:14:28.288 EAL: Main lcore 0 is ready (tid=7f83e203aa00;cpuset=[0]) 01:14:28.288 EAL: Trying to obtain current memory policy. 01:14:28.288 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:28.288 EAL: Restoring previous memory policy: 0 01:14:28.288 EAL: request: mp_malloc_sync 01:14:28.288 EAL: No shared files mode enabled, IPC is disabled 01:14:28.288 EAL: Heap on socket 0 was expanded by 2MB 01:14:28.288 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 01:14:28.288 EAL: No PCI address specified using 'addr=' in: bus=pci 01:14:28.288 EAL: Mem event callback 'spdk:(nil)' registered 01:14:28.288 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 01:14:28.288 01:14:28.288 01:14:28.288 CUnit - A unit testing framework for C - Version 2.1-3 01:14:28.288 http://cunit.sourceforge.net/ 01:14:28.288 01:14:28.288 01:14:28.288 Suite: components_suite 01:14:28.288 Test: vtophys_malloc_test ...passed 01:14:28.288 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
01:14:28.288 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:28.288 EAL: Restoring previous memory policy: 4 01:14:28.288 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.288 EAL: request: mp_malloc_sync 01:14:28.288 EAL: No shared files mode enabled, IPC is disabled 01:14:28.288 EAL: Heap on socket 0 was expanded by 4MB 01:14:28.288 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.288 EAL: request: mp_malloc_sync 01:14:28.288 EAL: No shared files mode enabled, IPC is disabled 01:14:28.288 EAL: Heap on socket 0 was shrunk by 4MB 01:14:28.288 EAL: Trying to obtain current memory policy. 01:14:28.288 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:28.288 EAL: Restoring previous memory policy: 4 01:14:28.288 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.288 EAL: request: mp_malloc_sync 01:14:28.288 EAL: No shared files mode enabled, IPC is disabled 01:14:28.288 EAL: Heap on socket 0 was expanded by 6MB 01:14:28.288 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.288 EAL: request: mp_malloc_sync 01:14:28.288 EAL: No shared files mode enabled, IPC is disabled 01:14:28.288 EAL: Heap on socket 0 was shrunk by 6MB 01:14:28.288 EAL: Trying to obtain current memory policy. 01:14:28.288 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:28.288 EAL: Restoring previous memory policy: 4 01:14:28.288 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.288 EAL: request: mp_malloc_sync 01:14:28.288 EAL: No shared files mode enabled, IPC is disabled 01:14:28.288 EAL: Heap on socket 0 was expanded by 10MB 01:14:28.288 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.288 EAL: request: mp_malloc_sync 01:14:28.288 EAL: No shared files mode enabled, IPC is disabled 01:14:28.288 EAL: Heap on socket 0 was shrunk by 10MB 01:14:28.288 EAL: Trying to obtain current memory policy. 01:14:28.288 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:28.289 EAL: Restoring previous memory policy: 4 01:14:28.289 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.289 EAL: request: mp_malloc_sync 01:14:28.289 EAL: No shared files mode enabled, IPC is disabled 01:14:28.289 EAL: Heap on socket 0 was expanded by 18MB 01:14:28.289 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.289 EAL: request: mp_malloc_sync 01:14:28.289 EAL: No shared files mode enabled, IPC is disabled 01:14:28.289 EAL: Heap on socket 0 was shrunk by 18MB 01:14:28.289 EAL: Trying to obtain current memory policy. 01:14:28.289 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:28.289 EAL: Restoring previous memory policy: 4 01:14:28.289 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.289 EAL: request: mp_malloc_sync 01:14:28.289 EAL: No shared files mode enabled, IPC is disabled 01:14:28.289 EAL: Heap on socket 0 was expanded by 34MB 01:14:28.289 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.289 EAL: request: mp_malloc_sync 01:14:28.289 EAL: No shared files mode enabled, IPC is disabled 01:14:28.289 EAL: Heap on socket 0 was shrunk by 34MB 01:14:28.289 EAL: Trying to obtain current memory policy. 
01:14:28.289 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:28.289 EAL: Restoring previous memory policy: 4 01:14:28.289 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.289 EAL: request: mp_malloc_sync 01:14:28.289 EAL: No shared files mode enabled, IPC is disabled 01:14:28.289 EAL: Heap on socket 0 was expanded by 66MB 01:14:28.289 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.289 EAL: request: mp_malloc_sync 01:14:28.289 EAL: No shared files mode enabled, IPC is disabled 01:14:28.289 EAL: Heap on socket 0 was shrunk by 66MB 01:14:28.289 EAL: Trying to obtain current memory policy. 01:14:28.289 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:28.289 EAL: Restoring previous memory policy: 4 01:14:28.289 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.289 EAL: request: mp_malloc_sync 01:14:28.289 EAL: No shared files mode enabled, IPC is disabled 01:14:28.289 EAL: Heap on socket 0 was expanded by 130MB 01:14:28.562 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.562 EAL: request: mp_malloc_sync 01:14:28.562 EAL: No shared files mode enabled, IPC is disabled 01:14:28.562 EAL: Heap on socket 0 was shrunk by 130MB 01:14:28.562 EAL: Trying to obtain current memory policy. 01:14:28.562 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:28.562 EAL: Restoring previous memory policy: 4 01:14:28.562 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.562 EAL: request: mp_malloc_sync 01:14:28.562 EAL: No shared files mode enabled, IPC is disabled 01:14:28.562 EAL: Heap on socket 0 was expanded by 258MB 01:14:28.562 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.562 EAL: request: mp_malloc_sync 01:14:28.562 EAL: No shared files mode enabled, IPC is disabled 01:14:28.562 EAL: Heap on socket 0 was shrunk by 258MB 01:14:28.562 EAL: Trying to obtain current memory policy. 01:14:28.562 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:28.562 EAL: Restoring previous memory policy: 4 01:14:28.562 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.562 EAL: request: mp_malloc_sync 01:14:28.562 EAL: No shared files mode enabled, IPC is disabled 01:14:28.562 EAL: Heap on socket 0 was expanded by 514MB 01:14:28.825 EAL: Calling mem event callback 'spdk:(nil)' 01:14:28.825 EAL: request: mp_malloc_sync 01:14:28.825 EAL: No shared files mode enabled, IPC is disabled 01:14:28.825 EAL: Heap on socket 0 was shrunk by 514MB 01:14:28.825 EAL: Trying to obtain current memory policy. 
01:14:28.825 EAL: Setting policy MPOL_PREFERRED for socket 0 01:14:29.083 EAL: Restoring previous memory policy: 4 01:14:29.083 EAL: Calling mem event callback 'spdk:(nil)' 01:14:29.083 EAL: request: mp_malloc_sync 01:14:29.083 EAL: No shared files mode enabled, IPC is disabled 01:14:29.083 EAL: Heap on socket 0 was expanded by 1026MB 01:14:29.083 EAL: Calling mem event callback 'spdk:(nil)' 01:14:29.342 passed 01:14:29.342 01:14:29.342 Run Summary: Type Total Ran Passed Failed Inactive 01:14:29.342 suites 1 1 n/a 0 0 01:14:29.342 tests 2 2 2 0 0 01:14:29.342 asserts 5638 5638 5638 0 n/a 01:14:29.342 01:14:29.342 Elapsed time = 1.004 seconds 01:14:29.342 EAL: request: mp_malloc_sync 01:14:29.342 EAL: No shared files mode enabled, IPC is disabled 01:14:29.342 EAL: Heap on socket 0 was shrunk by 1026MB 01:14:29.342 EAL: Calling mem event callback 'spdk:(nil)' 01:14:29.342 EAL: request: mp_malloc_sync 01:14:29.342 EAL: No shared files mode enabled, IPC is disabled 01:14:29.342 EAL: Heap on socket 0 was shrunk by 2MB 01:14:29.342 EAL: No shared files mode enabled, IPC is disabled 01:14:29.342 EAL: No shared files mode enabled, IPC is disabled 01:14:29.342 EAL: No shared files mode enabled, IPC is disabled 01:14:29.342 01:14:29.342 real 0m1.213s 01:14:29.342 user 0m0.650s 01:14:29.342 sys 0m0.430s 01:14:29.342 05:09:11 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:29.342 05:09:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 01:14:29.342 ************************************ 01:14:29.342 END TEST env_vtophys 01:14:29.342 ************************************ 01:14:29.342 05:09:11 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 01:14:29.342 05:09:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:29.342 05:09:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:29.342 05:09:11 env -- common/autotest_common.sh@10 -- # set +x 01:14:29.342 ************************************ 01:14:29.342 START TEST env_pci 01:14:29.342 ************************************ 01:14:29.342 05:09:11 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 01:14:29.342 01:14:29.342 01:14:29.342 CUnit - A unit testing framework for C - Version 2.1-3 01:14:29.342 http://cunit.sourceforge.net/ 01:14:29.342 01:14:29.342 01:14:29.342 Suite: pci 01:14:29.342 Test: pci_hook ...[2024-12-09 05:09:11.772110] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56755 has claimed it 01:14:29.342 passed 01:14:29.342 01:14:29.342 Run Summary: Type Total Ran Passed Failed Inactive 01:14:29.342 suites 1 1 n/a 0 0 01:14:29.342 tests 1 1 1 0 0 01:14:29.342 asserts 25 25 25 0 n/a 01:14:29.342 01:14:29.342 Elapsed time = 0.002 seconds 01:14:29.342 EAL: Cannot find device (10000:00:01.0) 01:14:29.342 EAL: Failed to attach device on primary process 01:14:29.342 01:14:29.342 real 0m0.028s 01:14:29.342 user 0m0.018s 01:14:29.342 sys 0m0.010s 01:14:29.342 05:09:11 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:29.342 05:09:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 01:14:29.342 ************************************ 01:14:29.342 END TEST env_pci 01:14:29.342 ************************************ 01:14:29.601 05:09:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 01:14:29.601 05:09:11 env -- env/env.sh@15 -- # uname 01:14:29.601 05:09:11 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 01:14:29.601 05:09:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 01:14:29.601 05:09:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 01:14:29.601 05:09:11 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 01:14:29.601 05:09:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:29.601 05:09:11 env -- common/autotest_common.sh@10 -- # set +x 01:14:29.601 ************************************ 01:14:29.601 START TEST env_dpdk_post_init 01:14:29.601 ************************************ 01:14:29.601 05:09:11 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 01:14:29.601 EAL: Detected CPU lcores: 10 01:14:29.601 EAL: Detected NUMA nodes: 1 01:14:29.601 EAL: Detected shared linkage of DPDK 01:14:29.601 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 01:14:29.601 EAL: Selected IOVA mode 'PA' 01:14:29.601 TELEMETRY: No legacy callbacks, legacy socket not created 01:14:29.601 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 01:14:29.601 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 01:14:29.601 Starting DPDK initialization... 01:14:29.601 Starting SPDK post initialization... 01:14:29.601 SPDK NVMe probe 01:14:29.601 Attaching to 0000:00:10.0 01:14:29.601 Attaching to 0000:00:11.0 01:14:29.601 Attached to 0000:00:10.0 01:14:29.601 Attached to 0000:00:11.0 01:14:29.601 Cleaning up... 01:14:29.601 01:14:29.601 real 0m0.202s 01:14:29.601 user 0m0.061s 01:14:29.601 sys 0m0.041s 01:14:29.601 05:09:12 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:29.601 05:09:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 01:14:29.601 ************************************ 01:14:29.601 END TEST env_dpdk_post_init 01:14:29.601 ************************************ 01:14:29.860 05:09:12 env -- env/env.sh@26 -- # uname 01:14:29.860 05:09:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 01:14:29.860 05:09:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 01:14:29.860 05:09:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:29.860 05:09:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:29.860 05:09:12 env -- common/autotest_common.sh@10 -- # set +x 01:14:29.860 ************************************ 01:14:29.860 START TEST env_mem_callbacks 01:14:29.860 ************************************ 01:14:29.860 05:09:12 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 01:14:29.860 EAL: Detected CPU lcores: 10 01:14:29.860 EAL: Detected NUMA nodes: 1 01:14:29.860 EAL: Detected shared linkage of DPDK 01:14:29.860 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 01:14:29.860 EAL: Selected IOVA mode 'PA' 01:14:29.860 01:14:29.860 01:14:29.860 CUnit - A unit testing framework for C - Version 2.1-3 01:14:29.860 http://cunit.sourceforge.net/ 01:14:29.860 01:14:29.860 01:14:29.860 Suite: memory 01:14:29.860 Test: test ... 
01:14:29.860 register 0x200000200000 2097152 01:14:29.860 malloc 3145728 01:14:29.860 TELEMETRY: No legacy callbacks, legacy socket not created 01:14:29.860 register 0x200000400000 4194304 01:14:29.860 buf 0x200000500000 len 3145728 PASSED 01:14:29.860 malloc 64 01:14:29.860 buf 0x2000004fff40 len 64 PASSED 01:14:29.860 malloc 4194304 01:14:29.860 register 0x200000800000 6291456 01:14:29.860 buf 0x200000a00000 len 4194304 PASSED 01:14:29.860 free 0x200000500000 3145728 01:14:29.860 free 0x2000004fff40 64 01:14:29.860 unregister 0x200000400000 4194304 PASSED 01:14:29.860 free 0x200000a00000 4194304 01:14:29.860 unregister 0x200000800000 6291456 PASSED 01:14:29.860 malloc 8388608 01:14:29.860 register 0x200000400000 10485760 01:14:29.860 buf 0x200000600000 len 8388608 PASSED 01:14:29.860 free 0x200000600000 8388608 01:14:29.860 unregister 0x200000400000 10485760 PASSED 01:14:29.860 passed 01:14:29.860 01:14:29.860 Run Summary: Type Total Ran Passed Failed Inactive 01:14:29.860 suites 1 1 n/a 0 0 01:14:29.860 tests 1 1 1 0 0 01:14:29.860 asserts 15 15 15 0 n/a 01:14:29.860 01:14:29.860 Elapsed time = 0.010 seconds 01:14:29.860 01:14:29.860 real 0m0.149s 01:14:29.860 user 0m0.020s 01:14:29.860 sys 0m0.027s 01:14:29.860 05:09:12 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:29.860 05:09:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 01:14:29.860 ************************************ 01:14:29.860 END TEST env_mem_callbacks 01:14:29.860 ************************************ 01:14:30.120 01:14:30.120 real 0m2.599s 01:14:30.120 user 0m1.410s 01:14:30.120 sys 0m0.856s 01:14:30.120 05:09:12 env -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:30.120 05:09:12 env -- common/autotest_common.sh@10 -- # set +x 01:14:30.120 ************************************ 01:14:30.120 END TEST env 01:14:30.120 ************************************ 01:14:30.120 05:09:12 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 01:14:30.120 05:09:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:30.120 05:09:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:30.120 05:09:12 -- common/autotest_common.sh@10 -- # set +x 01:14:30.120 ************************************ 01:14:30.120 START TEST rpc 01:14:30.120 ************************************ 01:14:30.120 05:09:12 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 01:14:30.120 * Looking for test storage... 
01:14:30.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 01:14:30.120 05:09:12 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:14:30.120 05:09:12 rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:14:30.120 05:09:12 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:14:30.380 05:09:12 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:14:30.380 05:09:12 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:14:30.380 05:09:12 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:14:30.380 05:09:12 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:14:30.380 05:09:12 rpc -- scripts/common.sh@336 -- # IFS=.-: 01:14:30.380 05:09:12 rpc -- scripts/common.sh@336 -- # read -ra ver1 01:14:30.380 05:09:12 rpc -- scripts/common.sh@337 -- # IFS=.-: 01:14:30.380 05:09:12 rpc -- scripts/common.sh@337 -- # read -ra ver2 01:14:30.380 05:09:12 rpc -- scripts/common.sh@338 -- # local 'op=<' 01:14:30.380 05:09:12 rpc -- scripts/common.sh@340 -- # ver1_l=2 01:14:30.380 05:09:12 rpc -- scripts/common.sh@341 -- # ver2_l=1 01:14:30.380 05:09:12 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:14:30.380 05:09:12 rpc -- scripts/common.sh@344 -- # case "$op" in 01:14:30.380 05:09:12 rpc -- scripts/common.sh@345 -- # : 1 01:14:30.380 05:09:12 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:14:30.380 05:09:12 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:14:30.380 05:09:12 rpc -- scripts/common.sh@365 -- # decimal 1 01:14:30.380 05:09:12 rpc -- scripts/common.sh@353 -- # local d=1 01:14:30.380 05:09:12 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:14:30.380 05:09:12 rpc -- scripts/common.sh@355 -- # echo 1 01:14:30.380 05:09:12 rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:14:30.380 05:09:12 rpc -- scripts/common.sh@366 -- # decimal 2 01:14:30.380 05:09:12 rpc -- scripts/common.sh@353 -- # local d=2 01:14:30.380 05:09:12 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:14:30.380 05:09:12 rpc -- scripts/common.sh@355 -- # echo 2 01:14:30.380 05:09:12 rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:14:30.380 05:09:12 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:14:30.380 05:09:12 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:14:30.380 05:09:12 rpc -- scripts/common.sh@368 -- # return 0 01:14:30.380 05:09:12 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:14:30.380 05:09:12 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:14:30.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:30.380 --rc genhtml_branch_coverage=1 01:14:30.380 --rc genhtml_function_coverage=1 01:14:30.380 --rc genhtml_legend=1 01:14:30.380 --rc geninfo_all_blocks=1 01:14:30.380 --rc geninfo_unexecuted_blocks=1 01:14:30.380 01:14:30.380 ' 01:14:30.380 05:09:12 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:14:30.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:30.380 --rc genhtml_branch_coverage=1 01:14:30.380 --rc genhtml_function_coverage=1 01:14:30.380 --rc genhtml_legend=1 01:14:30.380 --rc geninfo_all_blocks=1 01:14:30.380 --rc geninfo_unexecuted_blocks=1 01:14:30.380 01:14:30.380 ' 01:14:30.380 05:09:12 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:14:30.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:30.380 --rc genhtml_branch_coverage=1 01:14:30.380 --rc genhtml_function_coverage=1 01:14:30.380 --rc 
genhtml_legend=1 01:14:30.380 --rc geninfo_all_blocks=1 01:14:30.380 --rc geninfo_unexecuted_blocks=1 01:14:30.380 01:14:30.380 ' 01:14:30.380 05:09:12 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:14:30.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:30.380 --rc genhtml_branch_coverage=1 01:14:30.380 --rc genhtml_function_coverage=1 01:14:30.380 --rc genhtml_legend=1 01:14:30.380 --rc geninfo_all_blocks=1 01:14:30.380 --rc geninfo_unexecuted_blocks=1 01:14:30.380 01:14:30.380 ' 01:14:30.380 05:09:12 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 01:14:30.380 05:09:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56878 01:14:30.380 05:09:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:14:30.380 05:09:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56878 01:14:30.380 05:09:12 rpc -- common/autotest_common.sh@835 -- # '[' -z 56878 ']' 01:14:30.380 05:09:12 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:30.380 05:09:12 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:14:30.380 05:09:12 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:14:30.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:14:30.380 05:09:12 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:14:30.380 05:09:12 rpc -- common/autotest_common.sh@10 -- # set +x 01:14:30.380 [2024-12-09 05:09:12.716723] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:14:30.380 [2024-12-09 05:09:12.717356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56878 ] 01:14:30.641 [2024-12-09 05:09:12.874417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:30.641 [2024-12-09 05:09:12.939052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 01:14:30.641 [2024-12-09 05:09:12.939114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56878' to capture a snapshot of events at runtime. 01:14:30.641 [2024-12-09 05:09:12.939122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:14:30.641 [2024-12-09 05:09:12.939127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:14:30.641 [2024-12-09 05:09:12.939131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56878 for offline analysis/debug. 
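The startup notices above describe how to inspect the bdev tracepoints enabled by '-e bdev'; a short sketch based on those hints, assuming the spdk_trace binary from the same build is on PATH:

  # snapshot bdev tracepoint events from the running target, as the notice suggests
  spdk_trace -s spdk_tgt -p 56878
  # or keep the shared-memory trace file for offline analysis after the target exits
  cp /dev/shm/spdk_tgt_trace.pid56878 /tmp/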
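The rpc_integrity and rpc_daemon_integrity cases that follow drive this target over its UNIX-socket RPC with the same create/inspect/delete sequence; a rough manual equivalent, assuming rpc_cmd maps to scripts/rpc.py against the default /var/tmp/spdk.sock:

  # create an 8 MiB malloc bdev with 512-byte blocks (reported as Malloc0, or Malloc2 in the daemon variant, in the dumps below)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 8 512
  # wrap it with a passthru bdev and check that two bdevs are now listed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq length
  # tear down in the same order the tests use before killing the target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_delete Passthru0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0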
01:14:30.641 [2024-12-09 05:09:12.939504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:14:30.641 [2024-12-09 05:09:12.998787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:14:31.211 05:09:13 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:14:31.211 05:09:13 rpc -- common/autotest_common.sh@868 -- # return 0 01:14:31.211 05:09:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 01:14:31.211 05:09:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 01:14:31.211 05:09:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 01:14:31.211 05:09:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 01:14:31.211 05:09:13 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:31.211 05:09:13 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:31.211 05:09:13 rpc -- common/autotest_common.sh@10 -- # set +x 01:14:31.211 ************************************ 01:14:31.211 START TEST rpc_integrity 01:14:31.211 ************************************ 01:14:31.211 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 01:14:31.211 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 01:14:31.211 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:31.211 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:31.211 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:31.211 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 01:14:31.211 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 01:14:31.471 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 01:14:31.471 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 01:14:31.471 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:31.471 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:31.471 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:31.471 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 01:14:31.471 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 01:14:31.471 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:31.471 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:31.471 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:31.471 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 01:14:31.471 { 01:14:31.471 "name": "Malloc0", 01:14:31.471 "aliases": [ 01:14:31.471 "d4ef85e4-c57f-4660-8a22-71c5735d0b72" 01:14:31.471 ], 01:14:31.471 "product_name": "Malloc disk", 01:14:31.471 "block_size": 512, 01:14:31.471 "num_blocks": 16384, 01:14:31.471 "uuid": "d4ef85e4-c57f-4660-8a22-71c5735d0b72", 01:14:31.471 "assigned_rate_limits": { 01:14:31.471 "rw_ios_per_sec": 0, 01:14:31.471 "rw_mbytes_per_sec": 0, 01:14:31.471 "r_mbytes_per_sec": 0, 01:14:31.471 "w_mbytes_per_sec": 0 01:14:31.471 }, 01:14:31.471 "claimed": false, 01:14:31.471 "zoned": false, 01:14:31.471 
"supported_io_types": { 01:14:31.471 "read": true, 01:14:31.471 "write": true, 01:14:31.471 "unmap": true, 01:14:31.471 "flush": true, 01:14:31.471 "reset": true, 01:14:31.471 "nvme_admin": false, 01:14:31.471 "nvme_io": false, 01:14:31.471 "nvme_io_md": false, 01:14:31.471 "write_zeroes": true, 01:14:31.471 "zcopy": true, 01:14:31.471 "get_zone_info": false, 01:14:31.471 "zone_management": false, 01:14:31.471 "zone_append": false, 01:14:31.471 "compare": false, 01:14:31.471 "compare_and_write": false, 01:14:31.471 "abort": true, 01:14:31.471 "seek_hole": false, 01:14:31.471 "seek_data": false, 01:14:31.471 "copy": true, 01:14:31.471 "nvme_iov_md": false 01:14:31.471 }, 01:14:31.471 "memory_domains": [ 01:14:31.471 { 01:14:31.471 "dma_device_id": "system", 01:14:31.471 "dma_device_type": 1 01:14:31.471 }, 01:14:31.471 { 01:14:31.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:14:31.471 "dma_device_type": 2 01:14:31.471 } 01:14:31.471 ], 01:14:31.471 "driver_specific": {} 01:14:31.471 } 01:14:31.471 ]' 01:14:31.471 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 01:14:31.471 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 01:14:31.471 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 01:14:31.471 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:31.471 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:31.471 [2024-12-09 05:09:13.772847] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 01:14:31.471 [2024-12-09 05:09:13.772900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 01:14:31.471 [2024-12-09 05:09:13.772918] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x188f050 01:14:31.471 [2024-12-09 05:09:13.772925] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:14:31.471 [2024-12-09 05:09:13.774529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:14:31.471 [2024-12-09 05:09:13.774564] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 01:14:31.471 Passthru0 01:14:31.471 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:31.471 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 01:14:31.471 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:31.471 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:31.471 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:31.471 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 01:14:31.471 { 01:14:31.471 "name": "Malloc0", 01:14:31.471 "aliases": [ 01:14:31.471 "d4ef85e4-c57f-4660-8a22-71c5735d0b72" 01:14:31.471 ], 01:14:31.471 "product_name": "Malloc disk", 01:14:31.471 "block_size": 512, 01:14:31.471 "num_blocks": 16384, 01:14:31.471 "uuid": "d4ef85e4-c57f-4660-8a22-71c5735d0b72", 01:14:31.471 "assigned_rate_limits": { 01:14:31.471 "rw_ios_per_sec": 0, 01:14:31.471 "rw_mbytes_per_sec": 0, 01:14:31.471 "r_mbytes_per_sec": 0, 01:14:31.471 "w_mbytes_per_sec": 0 01:14:31.471 }, 01:14:31.471 "claimed": true, 01:14:31.471 "claim_type": "exclusive_write", 01:14:31.471 "zoned": false, 01:14:31.471 "supported_io_types": { 01:14:31.471 "read": true, 01:14:31.471 "write": true, 01:14:31.471 "unmap": true, 01:14:31.471 "flush": true, 01:14:31.471 "reset": true, 01:14:31.471 "nvme_admin": false, 
01:14:31.471 "nvme_io": false, 01:14:31.471 "nvme_io_md": false, 01:14:31.471 "write_zeroes": true, 01:14:31.471 "zcopy": true, 01:14:31.471 "get_zone_info": false, 01:14:31.471 "zone_management": false, 01:14:31.471 "zone_append": false, 01:14:31.471 "compare": false, 01:14:31.471 "compare_and_write": false, 01:14:31.471 "abort": true, 01:14:31.471 "seek_hole": false, 01:14:31.471 "seek_data": false, 01:14:31.471 "copy": true, 01:14:31.471 "nvme_iov_md": false 01:14:31.471 }, 01:14:31.471 "memory_domains": [ 01:14:31.471 { 01:14:31.471 "dma_device_id": "system", 01:14:31.471 "dma_device_type": 1 01:14:31.471 }, 01:14:31.471 { 01:14:31.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:14:31.471 "dma_device_type": 2 01:14:31.471 } 01:14:31.471 ], 01:14:31.471 "driver_specific": {} 01:14:31.471 }, 01:14:31.471 { 01:14:31.471 "name": "Passthru0", 01:14:31.471 "aliases": [ 01:14:31.471 "ecf8b236-42c5-57f8-84d5-cb41fbd78f4e" 01:14:31.471 ], 01:14:31.471 "product_name": "passthru", 01:14:31.471 "block_size": 512, 01:14:31.471 "num_blocks": 16384, 01:14:31.471 "uuid": "ecf8b236-42c5-57f8-84d5-cb41fbd78f4e", 01:14:31.471 "assigned_rate_limits": { 01:14:31.471 "rw_ios_per_sec": 0, 01:14:31.471 "rw_mbytes_per_sec": 0, 01:14:31.471 "r_mbytes_per_sec": 0, 01:14:31.471 "w_mbytes_per_sec": 0 01:14:31.471 }, 01:14:31.471 "claimed": false, 01:14:31.471 "zoned": false, 01:14:31.471 "supported_io_types": { 01:14:31.471 "read": true, 01:14:31.472 "write": true, 01:14:31.472 "unmap": true, 01:14:31.472 "flush": true, 01:14:31.472 "reset": true, 01:14:31.472 "nvme_admin": false, 01:14:31.472 "nvme_io": false, 01:14:31.472 "nvme_io_md": false, 01:14:31.472 "write_zeroes": true, 01:14:31.472 "zcopy": true, 01:14:31.472 "get_zone_info": false, 01:14:31.472 "zone_management": false, 01:14:31.472 "zone_append": false, 01:14:31.472 "compare": false, 01:14:31.472 "compare_and_write": false, 01:14:31.472 "abort": true, 01:14:31.472 "seek_hole": false, 01:14:31.472 "seek_data": false, 01:14:31.472 "copy": true, 01:14:31.472 "nvme_iov_md": false 01:14:31.472 }, 01:14:31.472 "memory_domains": [ 01:14:31.472 { 01:14:31.472 "dma_device_id": "system", 01:14:31.472 "dma_device_type": 1 01:14:31.472 }, 01:14:31.472 { 01:14:31.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:14:31.472 "dma_device_type": 2 01:14:31.472 } 01:14:31.472 ], 01:14:31.472 "driver_specific": { 01:14:31.472 "passthru": { 01:14:31.472 "name": "Passthru0", 01:14:31.472 "base_bdev_name": "Malloc0" 01:14:31.472 } 01:14:31.472 } 01:14:31.472 } 01:14:31.472 ]' 01:14:31.472 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 01:14:31.472 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 01:14:31.472 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 01:14:31.472 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:31.472 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:31.472 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:31.472 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 01:14:31.472 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:31.472 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:31.472 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:31.472 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 01:14:31.472 05:09:13 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:14:31.472 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:31.472 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:31.472 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 01:14:31.472 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 01:14:31.731 05:09:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 01:14:31.731 01:14:31.731 real 0m0.316s 01:14:31.731 user 0m0.199s 01:14:31.731 sys 0m0.044s 01:14:31.731 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:31.731 05:09:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:31.731 ************************************ 01:14:31.731 END TEST rpc_integrity 01:14:31.731 ************************************ 01:14:31.731 05:09:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 01:14:31.731 05:09:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:31.731 05:09:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:31.731 05:09:14 rpc -- common/autotest_common.sh@10 -- # set +x 01:14:31.731 ************************************ 01:14:31.731 START TEST rpc_plugins 01:14:31.731 ************************************ 01:14:31.731 05:09:14 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 01:14:31.731 05:09:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 01:14:31.731 05:09:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:31.731 05:09:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:14:31.731 05:09:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:31.731 05:09:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 01:14:31.731 05:09:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 01:14:31.731 05:09:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:31.731 05:09:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:14:31.731 05:09:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:31.731 05:09:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 01:14:31.731 { 01:14:31.731 "name": "Malloc1", 01:14:31.731 "aliases": [ 01:14:31.731 "0007e3d6-219c-43ad-8424-bb8ceb840c04" 01:14:31.731 ], 01:14:31.731 "product_name": "Malloc disk", 01:14:31.731 "block_size": 4096, 01:14:31.731 "num_blocks": 256, 01:14:31.731 "uuid": "0007e3d6-219c-43ad-8424-bb8ceb840c04", 01:14:31.731 "assigned_rate_limits": { 01:14:31.731 "rw_ios_per_sec": 0, 01:14:31.731 "rw_mbytes_per_sec": 0, 01:14:31.731 "r_mbytes_per_sec": 0, 01:14:31.731 "w_mbytes_per_sec": 0 01:14:31.731 }, 01:14:31.731 "claimed": false, 01:14:31.731 "zoned": false, 01:14:31.731 "supported_io_types": { 01:14:31.731 "read": true, 01:14:31.731 "write": true, 01:14:31.731 "unmap": true, 01:14:31.731 "flush": true, 01:14:31.731 "reset": true, 01:14:31.731 "nvme_admin": false, 01:14:31.731 "nvme_io": false, 01:14:31.731 "nvme_io_md": false, 01:14:31.731 "write_zeroes": true, 01:14:31.731 "zcopy": true, 01:14:31.731 "get_zone_info": false, 01:14:31.731 "zone_management": false, 01:14:31.731 "zone_append": false, 01:14:31.731 "compare": false, 01:14:31.731 "compare_and_write": false, 01:14:31.731 "abort": true, 01:14:31.731 "seek_hole": false, 01:14:31.731 "seek_data": false, 01:14:31.731 "copy": true, 01:14:31.731 "nvme_iov_md": false 01:14:31.731 }, 01:14:31.731 "memory_domains": [ 01:14:31.731 { 
01:14:31.731 "dma_device_id": "system", 01:14:31.731 "dma_device_type": 1 01:14:31.731 }, 01:14:31.731 { 01:14:31.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:14:31.731 "dma_device_type": 2 01:14:31.731 } 01:14:31.731 ], 01:14:31.731 "driver_specific": {} 01:14:31.731 } 01:14:31.731 ]' 01:14:31.731 05:09:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 01:14:31.731 05:09:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 01:14:31.731 05:09:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 01:14:31.731 05:09:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:31.731 05:09:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:14:31.731 05:09:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:31.731 05:09:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 01:14:31.731 05:09:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:31.731 05:09:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:14:31.731 05:09:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:31.731 05:09:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 01:14:31.731 05:09:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 01:14:31.731 05:09:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 01:14:31.731 01:14:31.731 real 0m0.159s 01:14:31.731 user 0m0.101s 01:14:31.731 sys 0m0.024s 01:14:31.731 05:09:14 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:31.731 05:09:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 01:14:31.731 ************************************ 01:14:31.731 END TEST rpc_plugins 01:14:31.731 ************************************ 01:14:31.990 05:09:14 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 01:14:31.990 05:09:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:31.990 05:09:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:31.990 05:09:14 rpc -- common/autotest_common.sh@10 -- # set +x 01:14:31.990 ************************************ 01:14:31.990 START TEST rpc_trace_cmd_test 01:14:31.990 ************************************ 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 01:14:31.990 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56878", 01:14:31.990 "tpoint_group_mask": "0x8", 01:14:31.990 "iscsi_conn": { 01:14:31.990 "mask": "0x2", 01:14:31.990 "tpoint_mask": "0x0" 01:14:31.990 }, 01:14:31.990 "scsi": { 01:14:31.990 "mask": "0x4", 01:14:31.990 "tpoint_mask": "0x0" 01:14:31.990 }, 01:14:31.990 "bdev": { 01:14:31.990 "mask": "0x8", 01:14:31.990 "tpoint_mask": "0xffffffffffffffff" 01:14:31.990 }, 01:14:31.990 "nvmf_rdma": { 01:14:31.990 "mask": "0x10", 01:14:31.990 "tpoint_mask": "0x0" 01:14:31.990 }, 01:14:31.990 "nvmf_tcp": { 01:14:31.990 "mask": "0x20", 01:14:31.990 "tpoint_mask": "0x0" 01:14:31.990 }, 01:14:31.990 "ftl": { 01:14:31.990 
"mask": "0x40", 01:14:31.990 "tpoint_mask": "0x0" 01:14:31.990 }, 01:14:31.990 "blobfs": { 01:14:31.990 "mask": "0x80", 01:14:31.990 "tpoint_mask": "0x0" 01:14:31.990 }, 01:14:31.990 "dsa": { 01:14:31.990 "mask": "0x200", 01:14:31.990 "tpoint_mask": "0x0" 01:14:31.990 }, 01:14:31.990 "thread": { 01:14:31.990 "mask": "0x400", 01:14:31.990 "tpoint_mask": "0x0" 01:14:31.990 }, 01:14:31.990 "nvme_pcie": { 01:14:31.990 "mask": "0x800", 01:14:31.990 "tpoint_mask": "0x0" 01:14:31.990 }, 01:14:31.990 "iaa": { 01:14:31.990 "mask": "0x1000", 01:14:31.990 "tpoint_mask": "0x0" 01:14:31.990 }, 01:14:31.990 "nvme_tcp": { 01:14:31.990 "mask": "0x2000", 01:14:31.990 "tpoint_mask": "0x0" 01:14:31.990 }, 01:14:31.990 "bdev_nvme": { 01:14:31.990 "mask": "0x4000", 01:14:31.990 "tpoint_mask": "0x0" 01:14:31.990 }, 01:14:31.990 "sock": { 01:14:31.990 "mask": "0x8000", 01:14:31.990 "tpoint_mask": "0x0" 01:14:31.990 }, 01:14:31.990 "blob": { 01:14:31.990 "mask": "0x10000", 01:14:31.990 "tpoint_mask": "0x0" 01:14:31.990 }, 01:14:31.990 "bdev_raid": { 01:14:31.990 "mask": "0x20000", 01:14:31.990 "tpoint_mask": "0x0" 01:14:31.990 }, 01:14:31.990 "scheduler": { 01:14:31.990 "mask": "0x40000", 01:14:31.990 "tpoint_mask": "0x0" 01:14:31.990 } 01:14:31.990 }' 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 01:14:31.990 01:14:31.990 real 0m0.201s 01:14:31.990 user 0m0.155s 01:14:31.990 sys 0m0.036s 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:31.990 05:09:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 01:14:31.990 ************************************ 01:14:31.990 END TEST rpc_trace_cmd_test 01:14:31.990 ************************************ 01:14:32.259 05:09:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 01:14:32.259 05:09:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 01:14:32.259 05:09:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 01:14:32.259 05:09:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:32.259 05:09:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:32.259 05:09:14 rpc -- common/autotest_common.sh@10 -- # set +x 01:14:32.259 ************************************ 01:14:32.259 START TEST rpc_daemon_integrity 01:14:32.259 ************************************ 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:32.259 
05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 01:14:32.259 { 01:14:32.259 "name": "Malloc2", 01:14:32.259 "aliases": [ 01:14:32.259 "c8c2a5ab-a5f5-43a4-a3d6-95f1acac2c0f" 01:14:32.259 ], 01:14:32.259 "product_name": "Malloc disk", 01:14:32.259 "block_size": 512, 01:14:32.259 "num_blocks": 16384, 01:14:32.259 "uuid": "c8c2a5ab-a5f5-43a4-a3d6-95f1acac2c0f", 01:14:32.259 "assigned_rate_limits": { 01:14:32.259 "rw_ios_per_sec": 0, 01:14:32.259 "rw_mbytes_per_sec": 0, 01:14:32.259 "r_mbytes_per_sec": 0, 01:14:32.259 "w_mbytes_per_sec": 0 01:14:32.259 }, 01:14:32.259 "claimed": false, 01:14:32.259 "zoned": false, 01:14:32.259 "supported_io_types": { 01:14:32.259 "read": true, 01:14:32.259 "write": true, 01:14:32.259 "unmap": true, 01:14:32.259 "flush": true, 01:14:32.259 "reset": true, 01:14:32.259 "nvme_admin": false, 01:14:32.259 "nvme_io": false, 01:14:32.259 "nvme_io_md": false, 01:14:32.259 "write_zeroes": true, 01:14:32.259 "zcopy": true, 01:14:32.259 "get_zone_info": false, 01:14:32.259 "zone_management": false, 01:14:32.259 "zone_append": false, 01:14:32.259 "compare": false, 01:14:32.259 "compare_and_write": false, 01:14:32.259 "abort": true, 01:14:32.259 "seek_hole": false, 01:14:32.259 "seek_data": false, 01:14:32.259 "copy": true, 01:14:32.259 "nvme_iov_md": false 01:14:32.259 }, 01:14:32.259 "memory_domains": [ 01:14:32.259 { 01:14:32.259 "dma_device_id": "system", 01:14:32.259 "dma_device_type": 1 01:14:32.259 }, 01:14:32.259 { 01:14:32.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:14:32.259 "dma_device_type": 2 01:14:32.259 } 01:14:32.259 ], 01:14:32.259 "driver_specific": {} 01:14:32.259 } 01:14:32.259 ]' 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:32.259 [2024-12-09 05:09:14.667419] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 01:14:32.259 [2024-12-09 05:09:14.667479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 01:14:32.259 [2024-12-09 05:09:14.667496] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x189a030 01:14:32.259 [2024-12-09 05:09:14.667503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 01:14:32.259 [2024-12-09 05:09:14.668972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 01:14:32.259 [2024-12-09 05:09:14.669010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 01:14:32.259 Passthru0 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:32.259 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 01:14:32.259 { 01:14:32.259 "name": "Malloc2", 01:14:32.259 "aliases": [ 01:14:32.259 "c8c2a5ab-a5f5-43a4-a3d6-95f1acac2c0f" 01:14:32.259 ], 01:14:32.259 "product_name": "Malloc disk", 01:14:32.259 "block_size": 512, 01:14:32.259 "num_blocks": 16384, 01:14:32.259 "uuid": "c8c2a5ab-a5f5-43a4-a3d6-95f1acac2c0f", 01:14:32.259 "assigned_rate_limits": { 01:14:32.259 "rw_ios_per_sec": 0, 01:14:32.259 "rw_mbytes_per_sec": 0, 01:14:32.259 "r_mbytes_per_sec": 0, 01:14:32.259 "w_mbytes_per_sec": 0 01:14:32.259 }, 01:14:32.259 "claimed": true, 01:14:32.259 "claim_type": "exclusive_write", 01:14:32.259 "zoned": false, 01:14:32.259 "supported_io_types": { 01:14:32.259 "read": true, 01:14:32.259 "write": true, 01:14:32.259 "unmap": true, 01:14:32.259 "flush": true, 01:14:32.259 "reset": true, 01:14:32.259 "nvme_admin": false, 01:14:32.259 "nvme_io": false, 01:14:32.259 "nvme_io_md": false, 01:14:32.259 "write_zeroes": true, 01:14:32.259 "zcopy": true, 01:14:32.259 "get_zone_info": false, 01:14:32.259 "zone_management": false, 01:14:32.259 "zone_append": false, 01:14:32.260 "compare": false, 01:14:32.260 "compare_and_write": false, 01:14:32.260 "abort": true, 01:14:32.260 "seek_hole": false, 01:14:32.260 "seek_data": false, 01:14:32.260 "copy": true, 01:14:32.260 "nvme_iov_md": false 01:14:32.260 }, 01:14:32.260 "memory_domains": [ 01:14:32.260 { 01:14:32.260 "dma_device_id": "system", 01:14:32.260 "dma_device_type": 1 01:14:32.260 }, 01:14:32.260 { 01:14:32.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:14:32.260 "dma_device_type": 2 01:14:32.260 } 01:14:32.260 ], 01:14:32.260 "driver_specific": {} 01:14:32.260 }, 01:14:32.260 { 01:14:32.260 "name": "Passthru0", 01:14:32.260 "aliases": [ 01:14:32.260 "48486e4a-ccc2-5785-9d0b-2ecb33e8853c" 01:14:32.260 ], 01:14:32.260 "product_name": "passthru", 01:14:32.260 "block_size": 512, 01:14:32.260 "num_blocks": 16384, 01:14:32.260 "uuid": "48486e4a-ccc2-5785-9d0b-2ecb33e8853c", 01:14:32.260 "assigned_rate_limits": { 01:14:32.260 "rw_ios_per_sec": 0, 01:14:32.260 "rw_mbytes_per_sec": 0, 01:14:32.260 "r_mbytes_per_sec": 0, 01:14:32.260 "w_mbytes_per_sec": 0 01:14:32.260 }, 01:14:32.260 "claimed": false, 01:14:32.260 "zoned": false, 01:14:32.260 "supported_io_types": { 01:14:32.260 "read": true, 01:14:32.260 "write": true, 01:14:32.260 "unmap": true, 01:14:32.260 "flush": true, 01:14:32.260 "reset": true, 01:14:32.260 "nvme_admin": false, 01:14:32.260 "nvme_io": false, 01:14:32.260 
"nvme_io_md": false, 01:14:32.260 "write_zeroes": true, 01:14:32.260 "zcopy": true, 01:14:32.260 "get_zone_info": false, 01:14:32.260 "zone_management": false, 01:14:32.260 "zone_append": false, 01:14:32.260 "compare": false, 01:14:32.260 "compare_and_write": false, 01:14:32.260 "abort": true, 01:14:32.260 "seek_hole": false, 01:14:32.260 "seek_data": false, 01:14:32.260 "copy": true, 01:14:32.260 "nvme_iov_md": false 01:14:32.260 }, 01:14:32.260 "memory_domains": [ 01:14:32.260 { 01:14:32.260 "dma_device_id": "system", 01:14:32.260 "dma_device_type": 1 01:14:32.260 }, 01:14:32.260 { 01:14:32.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 01:14:32.260 "dma_device_type": 2 01:14:32.260 } 01:14:32.260 ], 01:14:32.260 "driver_specific": { 01:14:32.260 "passthru": { 01:14:32.260 "name": "Passthru0", 01:14:32.260 "base_bdev_name": "Malloc2" 01:14:32.260 } 01:14:32.260 } 01:14:32.260 } 01:14:32.260 ]' 01:14:32.260 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 01:14:32.528 01:14:32.528 real 0m0.313s 01:14:32.528 user 0m0.184s 01:14:32.528 sys 0m0.055s 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:32.528 05:09:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 01:14:32.528 ************************************ 01:14:32.528 END TEST rpc_daemon_integrity 01:14:32.528 ************************************ 01:14:32.528 05:09:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:14:32.528 05:09:14 rpc -- rpc/rpc.sh@84 -- # killprocess 56878 01:14:32.528 05:09:14 rpc -- common/autotest_common.sh@954 -- # '[' -z 56878 ']' 01:14:32.528 05:09:14 rpc -- common/autotest_common.sh@958 -- # kill -0 56878 01:14:32.528 05:09:14 rpc -- common/autotest_common.sh@959 -- # uname 01:14:32.528 05:09:14 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:14:32.528 05:09:14 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56878 01:14:32.528 05:09:14 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
01:14:32.528 05:09:14 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:14:32.528 killing process with pid 56878 01:14:32.528 05:09:14 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56878' 01:14:32.528 05:09:14 rpc -- common/autotest_common.sh@973 -- # kill 56878 01:14:32.528 05:09:14 rpc -- common/autotest_common.sh@978 -- # wait 56878 01:14:33.096 01:14:33.096 real 0m2.883s 01:14:33.096 user 0m3.539s 01:14:33.096 sys 0m0.809s 01:14:33.096 05:09:15 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:33.096 05:09:15 rpc -- common/autotest_common.sh@10 -- # set +x 01:14:33.096 ************************************ 01:14:33.096 END TEST rpc 01:14:33.096 ************************************ 01:14:33.096 05:09:15 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 01:14:33.096 05:09:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:33.096 05:09:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:33.096 05:09:15 -- common/autotest_common.sh@10 -- # set +x 01:14:33.096 ************************************ 01:14:33.096 START TEST skip_rpc 01:14:33.096 ************************************ 01:14:33.096 05:09:15 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 01:14:33.096 * Looking for test storage... 01:14:33.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 01:14:33.096 05:09:15 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:14:33.096 05:09:15 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:14:33.096 05:09:15 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:14:33.357 05:09:15 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@345 -- # : 1 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@365 -- # decimal 1 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@353 -- # local d=1 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@355 -- # echo 1 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@366 -- # decimal 2 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@353 -- # local d=2 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@355 -- # echo 2 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:14:33.357 05:09:15 skip_rpc -- scripts/common.sh@368 -- # return 0 01:14:33.357 05:09:15 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:14:33.357 05:09:15 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:14:33.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:33.357 --rc genhtml_branch_coverage=1 01:14:33.357 --rc genhtml_function_coverage=1 01:14:33.357 --rc genhtml_legend=1 01:14:33.357 --rc geninfo_all_blocks=1 01:14:33.357 --rc geninfo_unexecuted_blocks=1 01:14:33.357 01:14:33.357 ' 01:14:33.357 05:09:15 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:14:33.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:33.357 --rc genhtml_branch_coverage=1 01:14:33.357 --rc genhtml_function_coverage=1 01:14:33.357 --rc genhtml_legend=1 01:14:33.357 --rc geninfo_all_blocks=1 01:14:33.357 --rc geninfo_unexecuted_blocks=1 01:14:33.357 01:14:33.357 ' 01:14:33.357 05:09:15 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:14:33.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:33.357 --rc genhtml_branch_coverage=1 01:14:33.357 --rc genhtml_function_coverage=1 01:14:33.357 --rc genhtml_legend=1 01:14:33.357 --rc geninfo_all_blocks=1 01:14:33.357 --rc geninfo_unexecuted_blocks=1 01:14:33.357 01:14:33.357 ' 01:14:33.357 05:09:15 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:14:33.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:33.357 --rc genhtml_branch_coverage=1 01:14:33.357 --rc genhtml_function_coverage=1 01:14:33.357 --rc genhtml_legend=1 01:14:33.357 --rc geninfo_all_blocks=1 01:14:33.357 --rc geninfo_unexecuted_blocks=1 01:14:33.357 01:14:33.357 ' 01:14:33.357 05:09:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:14:33.357 05:09:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:14:33.357 05:09:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 01:14:33.357 05:09:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:33.357 05:09:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:33.357 05:09:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:14:33.357 ************************************ 01:14:33.357 START TEST skip_rpc 01:14:33.357 ************************************ 01:14:33.357 05:09:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 01:14:33.357 05:09:15 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57083 01:14:33.357 05:09:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:14:33.357 05:09:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 01:14:33.357 05:09:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 01:14:33.357 [2024-12-09 05:09:15.655193] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:14:33.357 [2024-12-09 05:09:15.655282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57083 ] 01:14:33.357 [2024-12-09 05:09:15.808150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:33.617 [2024-12-09 05:09:15.864740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:14:33.617 [2024-12-09 05:09:15.923900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57083 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57083 ']' 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57083 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57083 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:14:38.916 killing process with pid 57083 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57083' 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57083 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57083 01:14:38.916 01:14:38.916 real 0m5.407s 01:14:38.916 user 0m5.072s 01:14:38.916 sys 0m0.257s 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:38.916 05:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:14:38.916 ************************************ 01:14:38.916 END TEST skip_rpc 01:14:38.916 ************************************ 01:14:38.916 05:09:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 01:14:38.916 05:09:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:38.916 05:09:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:38.916 05:09:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:14:38.916 ************************************ 01:14:38.916 START TEST skip_rpc_with_json 01:14:38.916 ************************************ 01:14:38.916 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 01:14:38.916 05:09:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 01:14:38.916 05:09:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57165 01:14:38.916 05:09:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:14:38.916 05:09:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:14:38.916 05:09:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57165 01:14:38.916 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57165 ']' 01:14:38.916 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:38.916 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 01:14:38.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:14:38.916 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:14:38.916 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 01:14:38.916 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:14:38.917 [2024-12-09 05:09:21.123283] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
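The skip_rpc_with_json run that follows builds up a configuration over RPC, saves it, and later restarts the target from the saved file; a sketch of that flow using the same paths the test sets up (redirecting save_config output to CONFIG_PATH is an assumption about how the script writes the file):

  # create the TCP transport first so it is captured in the saved configuration
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp
  # dump the running configuration to the test's CONFIG_PATH
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
  # replay it into a fresh target with no RPC server, as pid 57193 does later in the log
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json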
01:14:38.917 [2024-12-09 05:09:21.123377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57165 ] 01:14:38.917 [2024-12-09 05:09:21.276512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:38.917 [2024-12-09 05:09:21.322432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:14:39.175 [2024-12-09 05:09:21.377595] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:14:39.746 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:14:39.746 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 01:14:39.746 05:09:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 01:14:39.746 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:39.746 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:14:39.746 [2024-12-09 05:09:21.975981] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 01:14:39.746 request: 01:14:39.746 { 01:14:39.746 "trtype": "tcp", 01:14:39.746 "method": "nvmf_get_transports", 01:14:39.746 "req_id": 1 01:14:39.746 } 01:14:39.746 Got JSON-RPC error response 01:14:39.746 response: 01:14:39.746 { 01:14:39.746 "code": -19, 01:14:39.746 "message": "No such device" 01:14:39.746 } 01:14:39.746 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:14:39.746 05:09:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 01:14:39.746 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:39.746 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:14:39.746 [2024-12-09 05:09:21.988060] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:14:39.746 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:39.746 05:09:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 01:14:39.746 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:39.746 05:09:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:14:39.746 05:09:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:39.746 05:09:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:14:39.746 { 01:14:39.746 "subsystems": [ 01:14:39.746 { 01:14:39.746 "subsystem": "fsdev", 01:14:39.746 "config": [ 01:14:39.746 { 01:14:39.746 "method": "fsdev_set_opts", 01:14:39.746 "params": { 01:14:39.746 "fsdev_io_pool_size": 65535, 01:14:39.746 "fsdev_io_cache_size": 256 01:14:39.746 } 01:14:39.746 } 01:14:39.746 ] 01:14:39.746 }, 01:14:39.746 { 01:14:39.746 "subsystem": "keyring", 01:14:39.746 "config": [] 01:14:39.746 }, 01:14:39.746 { 01:14:39.746 "subsystem": "iobuf", 01:14:39.746 "config": [ 01:14:39.746 { 01:14:39.746 "method": "iobuf_set_options", 01:14:39.746 "params": { 01:14:39.746 "small_pool_count": 8192, 01:14:39.746 "large_pool_count": 1024, 01:14:39.746 "small_bufsize": 8192, 01:14:39.746 "large_bufsize": 135168, 01:14:39.746 "enable_numa": false 01:14:39.746 } 
01:14:39.746 } 01:14:39.746 ] 01:14:39.746 }, 01:14:39.746 { 01:14:39.746 "subsystem": "sock", 01:14:39.746 "config": [ 01:14:39.746 { 01:14:39.746 "method": "sock_set_default_impl", 01:14:39.746 "params": { 01:14:39.746 "impl_name": "uring" 01:14:39.746 } 01:14:39.746 }, 01:14:39.746 { 01:14:39.746 "method": "sock_impl_set_options", 01:14:39.746 "params": { 01:14:39.746 "impl_name": "ssl", 01:14:39.746 "recv_buf_size": 4096, 01:14:39.746 "send_buf_size": 4096, 01:14:39.746 "enable_recv_pipe": true, 01:14:39.746 "enable_quickack": false, 01:14:39.746 "enable_placement_id": 0, 01:14:39.746 "enable_zerocopy_send_server": true, 01:14:39.746 "enable_zerocopy_send_client": false, 01:14:39.746 "zerocopy_threshold": 0, 01:14:39.746 "tls_version": 0, 01:14:39.746 "enable_ktls": false 01:14:39.746 } 01:14:39.746 }, 01:14:39.746 { 01:14:39.746 "method": "sock_impl_set_options", 01:14:39.746 "params": { 01:14:39.746 "impl_name": "posix", 01:14:39.746 "recv_buf_size": 2097152, 01:14:39.746 "send_buf_size": 2097152, 01:14:39.746 "enable_recv_pipe": true, 01:14:39.746 "enable_quickack": false, 01:14:39.746 "enable_placement_id": 0, 01:14:39.746 "enable_zerocopy_send_server": true, 01:14:39.746 "enable_zerocopy_send_client": false, 01:14:39.746 "zerocopy_threshold": 0, 01:14:39.746 "tls_version": 0, 01:14:39.746 "enable_ktls": false 01:14:39.746 } 01:14:39.746 }, 01:14:39.746 { 01:14:39.746 "method": "sock_impl_set_options", 01:14:39.746 "params": { 01:14:39.746 "impl_name": "uring", 01:14:39.746 "recv_buf_size": 2097152, 01:14:39.746 "send_buf_size": 2097152, 01:14:39.746 "enable_recv_pipe": true, 01:14:39.746 "enable_quickack": false, 01:14:39.746 "enable_placement_id": 0, 01:14:39.746 "enable_zerocopy_send_server": false, 01:14:39.746 "enable_zerocopy_send_client": false, 01:14:39.746 "zerocopy_threshold": 0, 01:14:39.746 "tls_version": 0, 01:14:39.746 "enable_ktls": false 01:14:39.746 } 01:14:39.746 } 01:14:39.746 ] 01:14:39.746 }, 01:14:39.746 { 01:14:39.746 "subsystem": "vmd", 01:14:39.746 "config": [] 01:14:39.746 }, 01:14:39.746 { 01:14:39.746 "subsystem": "accel", 01:14:39.746 "config": [ 01:14:39.746 { 01:14:39.746 "method": "accel_set_options", 01:14:39.746 "params": { 01:14:39.746 "small_cache_size": 128, 01:14:39.746 "large_cache_size": 16, 01:14:39.746 "task_count": 2048, 01:14:39.746 "sequence_count": 2048, 01:14:39.746 "buf_count": 2048 01:14:39.746 } 01:14:39.746 } 01:14:39.746 ] 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "subsystem": "bdev", 01:14:39.747 "config": [ 01:14:39.747 { 01:14:39.747 "method": "bdev_set_options", 01:14:39.747 "params": { 01:14:39.747 "bdev_io_pool_size": 65535, 01:14:39.747 "bdev_io_cache_size": 256, 01:14:39.747 "bdev_auto_examine": true, 01:14:39.747 "iobuf_small_cache_size": 128, 01:14:39.747 "iobuf_large_cache_size": 16 01:14:39.747 } 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "method": "bdev_raid_set_options", 01:14:39.747 "params": { 01:14:39.747 "process_window_size_kb": 1024, 01:14:39.747 "process_max_bandwidth_mb_sec": 0 01:14:39.747 } 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "method": "bdev_iscsi_set_options", 01:14:39.747 "params": { 01:14:39.747 "timeout_sec": 30 01:14:39.747 } 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "method": "bdev_nvme_set_options", 01:14:39.747 "params": { 01:14:39.747 "action_on_timeout": "none", 01:14:39.747 "timeout_us": 0, 01:14:39.747 "timeout_admin_us": 0, 01:14:39.747 "keep_alive_timeout_ms": 10000, 01:14:39.747 "arbitration_burst": 0, 01:14:39.747 "low_priority_weight": 0, 01:14:39.747 "medium_priority_weight": 
0, 01:14:39.747 "high_priority_weight": 0, 01:14:39.747 "nvme_adminq_poll_period_us": 10000, 01:14:39.747 "nvme_ioq_poll_period_us": 0, 01:14:39.747 "io_queue_requests": 0, 01:14:39.747 "delay_cmd_submit": true, 01:14:39.747 "transport_retry_count": 4, 01:14:39.747 "bdev_retry_count": 3, 01:14:39.747 "transport_ack_timeout": 0, 01:14:39.747 "ctrlr_loss_timeout_sec": 0, 01:14:39.747 "reconnect_delay_sec": 0, 01:14:39.747 "fast_io_fail_timeout_sec": 0, 01:14:39.747 "disable_auto_failback": false, 01:14:39.747 "generate_uuids": false, 01:14:39.747 "transport_tos": 0, 01:14:39.747 "nvme_error_stat": false, 01:14:39.747 "rdma_srq_size": 0, 01:14:39.747 "io_path_stat": false, 01:14:39.747 "allow_accel_sequence": false, 01:14:39.747 "rdma_max_cq_size": 0, 01:14:39.747 "rdma_cm_event_timeout_ms": 0, 01:14:39.747 "dhchap_digests": [ 01:14:39.747 "sha256", 01:14:39.747 "sha384", 01:14:39.747 "sha512" 01:14:39.747 ], 01:14:39.747 "dhchap_dhgroups": [ 01:14:39.747 "null", 01:14:39.747 "ffdhe2048", 01:14:39.747 "ffdhe3072", 01:14:39.747 "ffdhe4096", 01:14:39.747 "ffdhe6144", 01:14:39.747 "ffdhe8192" 01:14:39.747 ] 01:14:39.747 } 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "method": "bdev_nvme_set_hotplug", 01:14:39.747 "params": { 01:14:39.747 "period_us": 100000, 01:14:39.747 "enable": false 01:14:39.747 } 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "method": "bdev_wait_for_examine" 01:14:39.747 } 01:14:39.747 ] 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "subsystem": "scsi", 01:14:39.747 "config": null 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "subsystem": "scheduler", 01:14:39.747 "config": [ 01:14:39.747 { 01:14:39.747 "method": "framework_set_scheduler", 01:14:39.747 "params": { 01:14:39.747 "name": "static" 01:14:39.747 } 01:14:39.747 } 01:14:39.747 ] 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "subsystem": "vhost_scsi", 01:14:39.747 "config": [] 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "subsystem": "vhost_blk", 01:14:39.747 "config": [] 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "subsystem": "ublk", 01:14:39.747 "config": [] 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "subsystem": "nbd", 01:14:39.747 "config": [] 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "subsystem": "nvmf", 01:14:39.747 "config": [ 01:14:39.747 { 01:14:39.747 "method": "nvmf_set_config", 01:14:39.747 "params": { 01:14:39.747 "discovery_filter": "match_any", 01:14:39.747 "admin_cmd_passthru": { 01:14:39.747 "identify_ctrlr": false 01:14:39.747 }, 01:14:39.747 "dhchap_digests": [ 01:14:39.747 "sha256", 01:14:39.747 "sha384", 01:14:39.747 "sha512" 01:14:39.747 ], 01:14:39.747 "dhchap_dhgroups": [ 01:14:39.747 "null", 01:14:39.747 "ffdhe2048", 01:14:39.747 "ffdhe3072", 01:14:39.747 "ffdhe4096", 01:14:39.747 "ffdhe6144", 01:14:39.747 "ffdhe8192" 01:14:39.747 ] 01:14:39.747 } 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "method": "nvmf_set_max_subsystems", 01:14:39.747 "params": { 01:14:39.747 "max_subsystems": 1024 01:14:39.747 } 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "method": "nvmf_set_crdt", 01:14:39.747 "params": { 01:14:39.747 "crdt1": 0, 01:14:39.747 "crdt2": 0, 01:14:39.747 "crdt3": 0 01:14:39.747 } 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "method": "nvmf_create_transport", 01:14:39.747 "params": { 01:14:39.747 "trtype": "TCP", 01:14:39.747 "max_queue_depth": 128, 01:14:39.747 "max_io_qpairs_per_ctrlr": 127, 01:14:39.747 "in_capsule_data_size": 4096, 01:14:39.747 "max_io_size": 131072, 01:14:39.747 "io_unit_size": 131072, 01:14:39.747 "max_aq_depth": 128, 01:14:39.747 "num_shared_buffers": 511, 01:14:39.747 
"buf_cache_size": 4294967295, 01:14:39.747 "dif_insert_or_strip": false, 01:14:39.747 "zcopy": false, 01:14:39.747 "c2h_success": true, 01:14:39.747 "sock_priority": 0, 01:14:39.747 "abort_timeout_sec": 1, 01:14:39.747 "ack_timeout": 0, 01:14:39.747 "data_wr_pool_size": 0 01:14:39.747 } 01:14:39.747 } 01:14:39.747 ] 01:14:39.747 }, 01:14:39.747 { 01:14:39.747 "subsystem": "iscsi", 01:14:39.747 "config": [ 01:14:39.747 { 01:14:39.747 "method": "iscsi_set_options", 01:14:39.747 "params": { 01:14:39.747 "node_base": "iqn.2016-06.io.spdk", 01:14:39.747 "max_sessions": 128, 01:14:39.747 "max_connections_per_session": 2, 01:14:39.747 "max_queue_depth": 64, 01:14:39.747 "default_time2wait": 2, 01:14:39.747 "default_time2retain": 20, 01:14:39.747 "first_burst_length": 8192, 01:14:39.747 "immediate_data": true, 01:14:39.747 "allow_duplicated_isid": false, 01:14:39.747 "error_recovery_level": 0, 01:14:39.747 "nop_timeout": 60, 01:14:39.747 "nop_in_interval": 30, 01:14:39.747 "disable_chap": false, 01:14:39.747 "require_chap": false, 01:14:39.747 "mutual_chap": false, 01:14:39.747 "chap_group": 0, 01:14:39.747 "max_large_datain_per_connection": 64, 01:14:39.747 "max_r2t_per_connection": 4, 01:14:39.747 "pdu_pool_size": 36864, 01:14:39.747 "immediate_data_pool_size": 16384, 01:14:39.747 "data_out_pool_size": 2048 01:14:39.747 } 01:14:39.747 } 01:14:39.747 ] 01:14:39.747 } 01:14:39.747 ] 01:14:39.747 } 01:14:39.747 05:09:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 01:14:39.747 05:09:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57165 01:14:39.747 05:09:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57165 ']' 01:14:39.747 05:09:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57165 01:14:39.747 05:09:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 01:14:39.747 05:09:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:14:39.747 05:09:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57165 01:14:40.033 05:09:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:14:40.033 05:09:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:14:40.033 killing process with pid 57165 01:14:40.033 05:09:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57165' 01:14:40.034 05:09:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57165 01:14:40.034 05:09:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57165 01:14:40.308 05:09:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57193 01:14:40.308 05:09:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:14:40.308 05:09:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 01:14:45.580 05:09:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57193 01:14:45.580 05:09:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57193 ']' 01:14:45.580 05:09:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57193 01:14:45.580 05:09:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 01:14:45.580 05:09:27 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:14:45.580 05:09:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57193 01:14:45.580 05:09:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:14:45.580 05:09:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:14:45.580 killing process with pid 57193 01:14:45.580 05:09:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57193' 01:14:45.580 05:09:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57193 01:14:45.580 05:09:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57193 01:14:45.580 05:09:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:14:45.580 05:09:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 01:14:45.580 01:14:45.580 real 0m6.894s 01:14:45.580 user 0m6.583s 01:14:45.580 sys 0m0.599s 01:14:45.580 05:09:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:45.580 05:09:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 01:14:45.580 ************************************ 01:14:45.580 END TEST skip_rpc_with_json 01:14:45.580 ************************************ 01:14:45.580 05:09:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 01:14:45.580 05:09:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:45.580 05:09:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:45.580 05:09:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:14:45.580 ************************************ 01:14:45.580 START TEST skip_rpc_with_delay 01:14:45.580 ************************************ 01:14:45.580 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 01:14:45.580 05:09:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:14:45.580 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 01:14:45.580 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:14:45.580 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:14:45.580 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:14:45.580 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:14:45.580 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:14:45.580 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:14:45.580 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:14:45.580 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:14:45.580 05:09:28 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 01:14:45.580 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 01:14:45.841 [2024-12-09 05:09:28.092806] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 01:14:45.841 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 01:14:45.841 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:14:45.841 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:14:45.841 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:14:45.841 01:14:45.841 real 0m0.080s 01:14:45.841 user 0m0.053s 01:14:45.841 sys 0m0.027s 01:14:45.841 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:45.841 05:09:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 01:14:45.841 ************************************ 01:14:45.841 END TEST skip_rpc_with_delay 01:14:45.841 ************************************ 01:14:45.841 05:09:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 01:14:45.841 05:09:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 01:14:45.841 05:09:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 01:14:45.841 05:09:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:45.841 05:09:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:45.841 05:09:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:14:45.841 ************************************ 01:14:45.841 START TEST exit_on_failed_rpc_init 01:14:45.841 ************************************ 01:14:45.841 05:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 01:14:45.841 05:09:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57298 01:14:45.841 05:09:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:14:45.841 05:09:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57298 01:14:45.841 05:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57298 ']' 01:14:45.841 05:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:45.841 05:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 01:14:45.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:14:45.841 05:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:14:45.841 05:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 01:14:45.841 05:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 01:14:45.841 [2024-12-09 05:09:28.239675] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:14:45.841 [2024-12-09 05:09:28.239775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57298 ] 01:14:46.100 [2024-12-09 05:09:28.392623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:46.100 [2024-12-09 05:09:28.436547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:14:46.100 [2024-12-09 05:09:28.491480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:14:46.670 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:14:46.670 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 01:14:46.670 05:09:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 01:14:46.670 05:09:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:14:46.670 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 01:14:46.670 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:14:46.671 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:14:46.671 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:14:46.671 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:14:46.671 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:14:46.671 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:14:46.671 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:14:46.671 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:14:46.671 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 01:14:46.671 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 01:14:46.930 [2024-12-09 05:09:29.180027] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:14:46.930 [2024-12-09 05:09:29.180422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57315 ] 01:14:46.930 [2024-12-09 05:09:29.329234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:47.190 [2024-12-09 05:09:29.405798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:14:47.190 [2024-12-09 05:09:29.405895] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
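The listen failure above is exactly what exit_on_failed_rpc_init is probing for: the first target (pid 57298) already owns the default RPC socket, so a second target started without a different -r path cannot bring its RPC server up and shuts down. A minimal by-hand sketch of the same collision, using the binary path from this run (hugepage and permission setup omitted):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 &                     # first instance binds the default /var/tmp/spdk.sock
    first=$!
    sleep 2                             # crude wait; the test uses waitforlisten instead
    if "$tgt" -m 0x2; then              # second instance tries the same socket and must fail
        echo 'ERROR: second target unexpectedly started' >&2
    fi
    kill -INT "$first"; wait "$first"

The test wraps the second launch in NOT, so the expected non-zero exit counts as success, which is why the log continues with the follow-up RPC error and spdk_app_stop'd on non-zero rather than a test failure.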
01:14:47.190 [2024-12-09 05:09:29.405904] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 01:14:47.190 [2024-12-09 05:09:29.405909] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57298 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57298 ']' 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57298 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57298 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:14:47.190 killing process with pid 57298 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57298' 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57298 01:14:47.190 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57298 01:14:47.759 01:14:47.760 real 0m1.735s 01:14:47.760 user 0m1.998s 01:14:47.760 sys 0m0.393s 01:14:47.760 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:47.760 05:09:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 01:14:47.760 ************************************ 01:14:47.760 END TEST exit_on_failed_rpc_init 01:14:47.760 ************************************ 01:14:47.760 05:09:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 01:14:47.760 01:14:47.760 real 0m14.620s 01:14:47.760 user 0m13.924s 01:14:47.760 sys 0m1.573s 01:14:47.760 05:09:29 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:47.760 05:09:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 01:14:47.760 ************************************ 01:14:47.760 END TEST skip_rpc 01:14:47.760 ************************************ 01:14:47.760 05:09:30 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 01:14:47.760 05:09:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:47.760 05:09:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:47.760 05:09:30 -- common/autotest_common.sh@10 -- # set +x 01:14:47.760 
************************************ 01:14:47.760 START TEST rpc_client 01:14:47.760 ************************************ 01:14:47.760 05:09:30 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 01:14:47.760 * Looking for test storage... 01:14:47.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 01:14:47.760 05:09:30 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:14:47.760 05:09:30 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 01:14:47.760 05:09:30 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:14:48.020 05:09:30 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@344 -- # case "$op" in 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@345 -- # : 1 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@365 -- # decimal 1 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@353 -- # local d=1 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@355 -- # echo 1 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@366 -- # decimal 2 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@353 -- # local d=2 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@355 -- # echo 2 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:14:48.020 05:09:30 rpc_client -- scripts/common.sh@368 -- # return 0 01:14:48.020 05:09:30 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:14:48.020 05:09:30 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:14:48.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:48.020 --rc genhtml_branch_coverage=1 01:14:48.020 --rc genhtml_function_coverage=1 01:14:48.020 --rc genhtml_legend=1 01:14:48.020 --rc geninfo_all_blocks=1 01:14:48.020 --rc geninfo_unexecuted_blocks=1 01:14:48.020 01:14:48.020 ' 01:14:48.020 05:09:30 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:14:48.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:48.020 --rc genhtml_branch_coverage=1 01:14:48.020 --rc genhtml_function_coverage=1 01:14:48.020 --rc genhtml_legend=1 01:14:48.020 --rc geninfo_all_blocks=1 01:14:48.020 --rc geninfo_unexecuted_blocks=1 01:14:48.020 01:14:48.020 ' 01:14:48.020 05:09:30 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:14:48.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:48.020 --rc genhtml_branch_coverage=1 01:14:48.020 --rc genhtml_function_coverage=1 01:14:48.020 --rc genhtml_legend=1 01:14:48.020 --rc geninfo_all_blocks=1 01:14:48.020 --rc geninfo_unexecuted_blocks=1 01:14:48.020 01:14:48.020 ' 01:14:48.020 05:09:30 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:14:48.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:48.020 --rc genhtml_branch_coverage=1 01:14:48.020 --rc genhtml_function_coverage=1 01:14:48.020 --rc genhtml_legend=1 01:14:48.020 --rc geninfo_all_blocks=1 01:14:48.020 --rc geninfo_unexecuted_blocks=1 01:14:48.020 01:14:48.020 ' 01:14:48.020 05:09:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 01:14:48.020 OK 01:14:48.020 05:09:30 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 01:14:48.020 01:14:48.020 real 0m0.242s 01:14:48.020 user 0m0.137s 01:14:48.020 sys 0m0.124s 01:14:48.020 05:09:30 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:48.020 05:09:30 rpc_client -- common/autotest_common.sh@10 -- # set +x 01:14:48.020 ************************************ 01:14:48.020 END TEST rpc_client 01:14:48.020 ************************************ 01:14:48.020 05:09:30 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 01:14:48.020 05:09:30 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:48.020 05:09:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:48.020 05:09:30 -- common/autotest_common.sh@10 -- # set +x 01:14:48.020 ************************************ 01:14:48.020 START TEST json_config 01:14:48.020 ************************************ 01:14:48.020 05:09:30 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 01:14:48.020 05:09:30 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:14:48.020 05:09:30 json_config -- common/autotest_common.sh@1693 -- # lcov --version 01:14:48.020 05:09:30 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:14:48.280 05:09:30 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:14:48.280 05:09:30 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:14:48.280 05:09:30 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 01:14:48.280 05:09:30 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 01:14:48.281 05:09:30 json_config -- scripts/common.sh@336 -- # IFS=.-: 01:14:48.281 05:09:30 json_config -- scripts/common.sh@336 -- # read -ra ver1 01:14:48.281 05:09:30 json_config -- scripts/common.sh@337 -- # IFS=.-: 01:14:48.281 05:09:30 json_config -- scripts/common.sh@337 -- # read -ra ver2 01:14:48.281 05:09:30 json_config -- scripts/common.sh@338 -- # local 'op=<' 01:14:48.281 05:09:30 json_config -- scripts/common.sh@340 -- # ver1_l=2 01:14:48.281 05:09:30 json_config -- scripts/common.sh@341 -- # ver2_l=1 01:14:48.281 05:09:30 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:14:48.281 05:09:30 json_config -- scripts/common.sh@344 -- # case "$op" in 01:14:48.281 05:09:30 json_config -- scripts/common.sh@345 -- # : 1 01:14:48.281 05:09:30 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 01:14:48.281 05:09:30 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:14:48.281 05:09:30 json_config -- scripts/common.sh@365 -- # decimal 1 01:14:48.281 05:09:30 json_config -- scripts/common.sh@353 -- # local d=1 01:14:48.281 05:09:30 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:14:48.281 05:09:30 json_config -- scripts/common.sh@355 -- # echo 1 01:14:48.281 05:09:30 json_config -- scripts/common.sh@365 -- # ver1[v]=1 01:14:48.281 05:09:30 json_config -- scripts/common.sh@366 -- # decimal 2 01:14:48.281 05:09:30 json_config -- scripts/common.sh@353 -- # local d=2 01:14:48.281 05:09:30 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:14:48.281 05:09:30 json_config -- scripts/common.sh@355 -- # echo 2 01:14:48.281 05:09:30 json_config -- scripts/common.sh@366 -- # ver2[v]=2 01:14:48.281 05:09:30 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:14:48.281 05:09:30 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:14:48.281 05:09:30 json_config -- scripts/common.sh@368 -- # return 0 01:14:48.281 05:09:30 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:14:48.281 05:09:30 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:14:48.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:48.281 --rc genhtml_branch_coverage=1 01:14:48.281 --rc genhtml_function_coverage=1 01:14:48.281 --rc genhtml_legend=1 01:14:48.281 --rc geninfo_all_blocks=1 01:14:48.281 --rc geninfo_unexecuted_blocks=1 01:14:48.281 01:14:48.281 ' 01:14:48.281 05:09:30 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:14:48.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:48.281 --rc genhtml_branch_coverage=1 01:14:48.281 --rc genhtml_function_coverage=1 01:14:48.281 --rc genhtml_legend=1 01:14:48.281 --rc geninfo_all_blocks=1 01:14:48.281 --rc geninfo_unexecuted_blocks=1 01:14:48.281 01:14:48.281 ' 01:14:48.281 05:09:30 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:14:48.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:48.281 --rc genhtml_branch_coverage=1 01:14:48.281 --rc genhtml_function_coverage=1 01:14:48.281 --rc genhtml_legend=1 01:14:48.281 --rc geninfo_all_blocks=1 01:14:48.281 --rc geninfo_unexecuted_blocks=1 01:14:48.281 01:14:48.281 ' 01:14:48.281 05:09:30 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:14:48.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:48.281 --rc genhtml_branch_coverage=1 01:14:48.281 --rc genhtml_function_coverage=1 01:14:48.281 --rc genhtml_legend=1 01:14:48.281 --rc geninfo_all_blocks=1 01:14:48.281 --rc geninfo_unexecuted_blocks=1 01:14:48.281 01:14:48.281 ' 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@7 -- # uname -s 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:14:48.281 05:09:30 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:14:48.281 05:09:30 json_config -- scripts/common.sh@15 -- # shopt -s extglob 01:14:48.281 05:09:30 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:14:48.281 05:09:30 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:14:48.281 05:09:30 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:14:48.281 05:09:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:48.281 05:09:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:48.281 05:09:30 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:48.281 05:09:30 json_config -- paths/export.sh@5 -- # export PATH 01:14:48.281 05:09:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@51 -- # : 0 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:14:48.281 05:09:30 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:14:48.281 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:14:48.281 05:09:30 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 01:14:48.281 INFO: JSON configuration test init 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 01:14:48.281 05:09:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:48.281 05:09:30 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 01:14:48.281 05:09:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:48.281 05:09:30 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:48.281 05:09:30 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 01:14:48.281 05:09:30 json_config -- json_config/common.sh@9 -- # local app=target 01:14:48.281 05:09:30 json_config -- json_config/common.sh@10 -- # shift 
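The json_config_test_start_app trace that begins here reduces to: launch spdk_tgt on a private RPC socket, hold it at --wait-for-rpc, and poll the socket until the RPC server answers. A rough stand-alone equivalent of what the helper does, with the flags and socket path taken from this trace (the real test later releases the app indirectly through rpc.py load_config rather than calling framework_start_init by hand):

    sock=/var/tmp/spdk_tgt.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &
    # poll until the RPC server is listening on the private socket
    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
    # pre-init configuration RPCs (accel, bdev, nvmf setup, ...) would go here
    "$rpc" -s "$sock" framework_start_init   # lets the app finish initialization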
01:14:48.281 05:09:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 01:14:48.281 05:09:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 01:14:48.281 05:09:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 01:14:48.282 05:09:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:14:48.282 05:09:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:14:48.282 05:09:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57460 01:14:48.282 Waiting for target to run... 01:14:48.282 05:09:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 01:14:48.282 05:09:30 json_config -- json_config/common.sh@25 -- # waitforlisten 57460 /var/tmp/spdk_tgt.sock 01:14:48.282 05:09:30 json_config -- common/autotest_common.sh@835 -- # '[' -z 57460 ']' 01:14:48.282 05:09:30 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 01:14:48.282 05:09:30 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 01:14:48.282 05:09:30 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 01:14:48.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 01:14:48.282 05:09:30 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 01:14:48.282 05:09:30 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 01:14:48.282 05:09:30 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:48.282 [2024-12-09 05:09:30.625552] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:14:48.282 [2024-12-09 05:09:30.625630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57460 ] 01:14:48.541 [2024-12-09 05:09:30.975699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:48.801 [2024-12-09 05:09:31.019155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:14:49.061 05:09:31 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:14:49.061 05:09:31 json_config -- common/autotest_common.sh@868 -- # return 0 01:14:49.061 05:09:31 json_config -- json_config/common.sh@26 -- # echo '' 01:14:49.061 01:14:49.061 05:09:31 json_config -- json_config/json_config.sh@276 -- # create_accel_config 01:14:49.061 05:09:31 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 01:14:49.061 05:09:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:49.061 05:09:31 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:49.061 05:09:31 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 01:14:49.061 05:09:31 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 01:14:49.061 05:09:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 01:14:49.061 05:09:31 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:49.320 05:09:31 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 01:14:49.320 05:09:31 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 01:14:49.320 05:09:31 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 01:14:49.581 [2024-12-09 05:09:31.775039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:14:49.581 05:09:31 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 01:14:49.581 05:09:31 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 01:14:49.581 05:09:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:49.581 05:09:31 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:49.581 05:09:31 json_config -- json_config/json_config.sh@45 -- # local ret=0 01:14:49.581 05:09:31 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 01:14:49.581 05:09:31 json_config -- json_config/json_config.sh@46 -- # local enabled_types 01:14:49.581 05:09:31 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 01:14:49.581 05:09:31 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 01:14:49.581 05:09:31 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 01:14:49.581 05:09:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 01:14:49.581 05:09:31 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@51 -- # local get_types 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@53 -- # local type_diff 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@54 -- # sort 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@54 -- # uniq -u 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@54 -- # type_diff= 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 01:14:49.851 05:09:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 01:14:49.851 05:09:32 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@62 -- # return 0 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 01:14:49.851 05:09:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:49.851 05:09:32 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:49.851 05:09:32 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 01:14:49.851 05:09:32 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 01:14:49.851 05:09:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 01:14:50.128 MallocForNvmf0 01:14:50.128 05:09:32 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 01:14:50.128 05:09:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 01:14:50.387 MallocForNvmf1 01:14:50.387 05:09:32 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 01:14:50.387 05:09:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 01:14:50.646 [2024-12-09 05:09:32.884526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:14:50.646 05:09:32 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:14:50.646 05:09:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:14:50.646 05:09:33 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 01:14:50.646 05:09:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 01:14:50.905 05:09:33 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 01:14:50.905 05:09:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 01:14:51.165 05:09:33 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 01:14:51.165 05:09:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 01:14:51.424 [2024-12-09 05:09:33.667426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:14:51.424 05:09:33 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 01:14:51.424 05:09:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 01:14:51.424 05:09:33 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:51.424 05:09:33 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 01:14:51.424 05:09:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 01:14:51.424 05:09:33 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:51.424 05:09:33 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 01:14:51.424 05:09:33 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 01:14:51.424 05:09:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 01:14:51.684 MallocBdevForConfigChangeCheck 01:14:51.684 05:09:33 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 01:14:51.684 05:09:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 01:14:51.684 05:09:33 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:51.684 05:09:34 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 01:14:51.684 05:09:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 01:14:51.944 INFO: shutting down applications... 01:14:51.944 05:09:34 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 01:14:51.944 05:09:34 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 01:14:51.944 05:09:34 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 01:14:51.944 05:09:34 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 01:14:51.944 05:09:34 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 01:14:52.513 Calling clear_iscsi_subsystem 01:14:52.513 Calling clear_nvmf_subsystem 01:14:52.513 Calling clear_nbd_subsystem 01:14:52.513 Calling clear_ublk_subsystem 01:14:52.513 Calling clear_vhost_blk_subsystem 01:14:52.513 Calling clear_vhost_scsi_subsystem 01:14:52.513 Calling clear_bdev_subsystem 01:14:52.513 05:09:34 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 01:14:52.513 05:09:34 json_config -- json_config/json_config.sh@350 -- # count=100 01:14:52.513 05:09:34 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 01:14:52.513 05:09:34 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 01:14:52.513 05:09:34 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 01:14:52.513 05:09:34 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 01:14:52.771 05:09:35 json_config -- json_config/json_config.sh@352 -- # break 01:14:52.771 05:09:35 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 01:14:52.771 05:09:35 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 01:14:52.771 05:09:35 json_config -- json_config/common.sh@31 -- # local app=target 01:14:52.771 05:09:35 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 01:14:52.771 05:09:35 json_config -- json_config/common.sh@35 -- # [[ -n 57460 ]] 01:14:52.771 05:09:35 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57460 01:14:52.771 05:09:35 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 01:14:52.771 05:09:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 01:14:52.771 05:09:35 json_config -- json_config/common.sh@41 -- # kill -0 57460 01:14:52.771 05:09:35 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 01:14:53.337 05:09:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 01:14:53.338 05:09:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 01:14:53.338 05:09:35 json_config -- json_config/common.sh@41 -- # kill -0 57460 01:14:53.338 05:09:35 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 01:14:53.338 05:09:35 json_config -- json_config/common.sh@43 -- # break 01:14:53.338 05:09:35 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 01:14:53.338 SPDK target shutdown done 01:14:53.338 05:09:35 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 01:14:53.338 INFO: relaunching applications... 01:14:53.338 05:09:35 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 01:14:53.338 05:09:35 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:14:53.338 05:09:35 json_config -- json_config/common.sh@9 -- # local app=target 01:14:53.338 05:09:35 json_config -- json_config/common.sh@10 -- # shift 01:14:53.338 05:09:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 01:14:53.338 05:09:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 01:14:53.338 05:09:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 01:14:53.338 05:09:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:14:53.338 05:09:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:14:53.338 05:09:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57645 01:14:53.338 Waiting for target to run... 01:14:53.338 05:09:35 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:14:53.338 05:09:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 01:14:53.338 05:09:35 json_config -- json_config/common.sh@25 -- # waitforlisten 57645 /var/tmp/spdk_tgt.sock 01:14:53.338 05:09:35 json_config -- common/autotest_common.sh@835 -- # '[' -z 57645 ']' 01:14:53.338 05:09:35 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 01:14:53.338 05:09:35 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 01:14:53.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 01:14:53.338 05:09:35 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 01:14:53.338 05:09:35 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 01:14:53.338 05:09:35 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:53.338 [2024-12-09 05:09:35.646393] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:14:53.338 [2024-12-09 05:09:35.646463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57645 ] 01:14:53.597 [2024-12-09 05:09:35.997171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:53.597 [2024-12-09 05:09:36.040842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:14:53.856 [2024-12-09 05:09:36.175108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:14:54.114 [2024-12-09 05:09:36.382584] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:14:54.114 [2024-12-09 05:09:36.414568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:14:54.114 05:09:36 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:14:54.114 05:09:36 json_config -- common/autotest_common.sh@868 -- # return 0 01:14:54.114 01:14:54.114 05:09:36 json_config -- json_config/common.sh@26 -- # echo '' 01:14:54.114 05:09:36 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 01:14:54.114 INFO: Checking if target configuration is the same... 01:14:54.114 05:09:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 01:14:54.114 05:09:36 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:14:54.114 05:09:36 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 01:14:54.114 05:09:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 01:14:54.114 + '[' 2 -ne 2 ']' 01:14:54.114 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 01:14:54.114 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 01:14:54.114 + rootdir=/home/vagrant/spdk_repo/spdk 01:14:54.114 +++ basename /dev/fd/62 01:14:54.114 ++ mktemp /tmp/62.XXX 01:14:54.114 + tmp_file_1=/tmp/62.U6K 01:14:54.114 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:14:54.114 ++ mktemp /tmp/spdk_tgt_config.json.XXX 01:14:54.114 + tmp_file_2=/tmp/spdk_tgt_config.json.eFH 01:14:54.114 + ret=0 01:14:54.114 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 01:14:54.681 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 01:14:54.681 + diff -u /tmp/62.U6K /tmp/spdk_tgt_config.json.eFH 01:14:54.681 INFO: JSON config files are the same 01:14:54.681 + echo 'INFO: JSON config files are the same' 01:14:54.681 + rm /tmp/62.U6K /tmp/spdk_tgt_config.json.eFH 01:14:54.681 + exit 0 01:14:54.681 05:09:36 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 01:14:54.681 INFO: changing configuration and checking if this can be detected... 01:14:54.681 05:09:36 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
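Both this identity check and the change detection announced above go through the same json_diff.sh recipe: dump the running target's configuration with save_config, canonicalize both sides with config_filter.py -method sort, and let diff -u decide. Stripped of the mktemp plumbing, the comparison amounts to the following sketch (config_filter.py is assumed to read the JSON on stdin, which is how json_diff.sh pipes it; the temp-file names are illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    "$filter" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/expected.json
    "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > /tmp/running.json
    diff -u /tmp/expected.json /tmp/running.json && echo 'INFO: JSON config files are the same'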
01:14:54.681 05:09:36 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 01:14:54.681 05:09:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 01:14:54.940 05:09:37 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:14:54.940 05:09:37 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 01:14:54.940 05:09:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 01:14:54.940 + '[' 2 -ne 2 ']' 01:14:54.940 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 01:14:54.940 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 01:14:54.940 + rootdir=/home/vagrant/spdk_repo/spdk 01:14:54.940 +++ basename /dev/fd/62 01:14:54.940 ++ mktemp /tmp/62.XXX 01:14:54.940 + tmp_file_1=/tmp/62.xl3 01:14:54.940 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:14:54.940 ++ mktemp /tmp/spdk_tgt_config.json.XXX 01:14:54.940 + tmp_file_2=/tmp/spdk_tgt_config.json.4dm 01:14:54.940 + ret=0 01:14:54.940 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 01:14:55.198 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 01:14:55.198 + diff -u /tmp/62.xl3 /tmp/spdk_tgt_config.json.4dm 01:14:55.198 + ret=1 01:14:55.199 + echo '=== Start of file: /tmp/62.xl3 ===' 01:14:55.199 + cat /tmp/62.xl3 01:14:55.199 + echo '=== End of file: /tmp/62.xl3 ===' 01:14:55.199 + echo '' 01:14:55.199 + echo '=== Start of file: /tmp/spdk_tgt_config.json.4dm ===' 01:14:55.199 + cat /tmp/spdk_tgt_config.json.4dm 01:14:55.199 + echo '=== End of file: /tmp/spdk_tgt_config.json.4dm ===' 01:14:55.199 + echo '' 01:14:55.199 + rm /tmp/62.xl3 /tmp/spdk_tgt_config.json.4dm 01:14:55.199 + exit 1 01:14:55.199 INFO: configuration change detected. 01:14:55.199 05:09:37 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
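To show the comparison is actually sensitive, the trace above then deletes a marker bdev (MallocBdevForConfigChangeCheck) over RPC and repeats the same sorted diff; this time diff exits non-zero, the script records ret=1 and prints both files. Condensed, reusing the sorted on-disk snapshot (/tmp/disk.json) from the previous sketch:

    # Remove the marker bdev so the live config no longer matches the file on disk.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    if ! diff -u /tmp/disk.json /tmp/live.json > /dev/null; then
        echo 'INFO: configuration change detected.'
    fi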
01:14:55.199 05:09:37 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 01:14:55.199 05:09:37 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 01:14:55.199 05:09:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:55.199 05:09:37 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:55.199 05:09:37 json_config -- json_config/json_config.sh@314 -- # local ret=0 01:14:55.199 05:09:37 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 01:14:55.199 05:09:37 json_config -- json_config/json_config.sh@324 -- # [[ -n 57645 ]] 01:14:55.199 05:09:37 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 01:14:55.199 05:09:37 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 01:14:55.199 05:09:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:55.199 05:09:37 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:55.199 05:09:37 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 01:14:55.199 05:09:37 json_config -- json_config/json_config.sh@200 -- # uname -s 01:14:55.199 05:09:37 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 01:14:55.199 05:09:37 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 01:14:55.199 05:09:37 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 01:14:55.199 05:09:37 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 01:14:55.199 05:09:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 01:14:55.199 05:09:37 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:55.456 05:09:37 json_config -- json_config/json_config.sh@330 -- # killprocess 57645 01:14:55.456 05:09:37 json_config -- common/autotest_common.sh@954 -- # '[' -z 57645 ']' 01:14:55.456 05:09:37 json_config -- common/autotest_common.sh@958 -- # kill -0 57645 01:14:55.456 05:09:37 json_config -- common/autotest_common.sh@959 -- # uname 01:14:55.456 05:09:37 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:14:55.456 05:09:37 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57645 01:14:55.456 05:09:37 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:14:55.456 05:09:37 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:14:55.456 killing process with pid 57645 01:14:55.456 05:09:37 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57645' 01:14:55.456 05:09:37 json_config -- common/autotest_common.sh@973 -- # kill 57645 01:14:55.456 05:09:37 json_config -- common/autotest_common.sh@978 -- # wait 57645 01:14:55.715 05:09:37 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 01:14:55.715 05:09:37 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 01:14:55.715 05:09:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 01:14:55.715 05:09:37 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:55.715 05:09:38 json_config -- json_config/json_config.sh@335 -- # return 0 01:14:55.715 05:09:38 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 01:14:55.715 INFO: Success 01:14:55.715 01:14:55.715 real 0m7.676s 01:14:55.715 user 0m10.549s 01:14:55.715 sys 0m1.736s 01:14:55.715 
05:09:38 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:55.715 05:09:38 json_config -- common/autotest_common.sh@10 -- # set +x 01:14:55.715 ************************************ 01:14:55.715 END TEST json_config 01:14:55.715 ************************************ 01:14:55.715 05:09:38 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 01:14:55.715 05:09:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:55.715 05:09:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:55.715 05:09:38 -- common/autotest_common.sh@10 -- # set +x 01:14:55.715 ************************************ 01:14:55.715 START TEST json_config_extra_key 01:14:55.715 ************************************ 01:14:55.715 05:09:38 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 01:14:55.975 05:09:38 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:14:55.975 05:09:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 01:14:55.975 05:09:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:14:55.975 05:09:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:14:55.975 05:09:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 01:14:55.975 05:09:38 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:14:55.975 05:09:38 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:14:55.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:55.975 --rc genhtml_branch_coverage=1 01:14:55.975 --rc genhtml_function_coverage=1 01:14:55.975 --rc genhtml_legend=1 01:14:55.975 --rc geninfo_all_blocks=1 01:14:55.975 --rc geninfo_unexecuted_blocks=1 01:14:55.975 01:14:55.975 ' 01:14:55.975 05:09:38 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:14:55.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:55.975 --rc genhtml_branch_coverage=1 01:14:55.975 --rc genhtml_function_coverage=1 01:14:55.975 --rc genhtml_legend=1 01:14:55.975 --rc geninfo_all_blocks=1 01:14:55.975 --rc geninfo_unexecuted_blocks=1 01:14:55.975 01:14:55.975 ' 01:14:55.975 05:09:38 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:14:55.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:55.975 --rc genhtml_branch_coverage=1 01:14:55.975 --rc genhtml_function_coverage=1 01:14:55.975 --rc genhtml_legend=1 01:14:55.975 --rc geninfo_all_blocks=1 01:14:55.975 --rc geninfo_unexecuted_blocks=1 01:14:55.975 01:14:55.975 ' 01:14:55.975 05:09:38 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:14:55.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:55.975 --rc genhtml_branch_coverage=1 01:14:55.975 --rc genhtml_function_coverage=1 01:14:55.975 --rc genhtml_legend=1 01:14:55.975 --rc geninfo_all_blocks=1 01:14:55.975 --rc geninfo_unexecuted_blocks=1 01:14:55.975 01:14:55.975 ' 01:14:55.975 05:09:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:14:55.975 05:09:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 01:14:55.975 05:09:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:14:55.975 05:09:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:14:55.975 05:09:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:14:55.975 05:09:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:14:55.975 05:09:38 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:14:55.976 05:09:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 01:14:55.976 05:09:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:14:55.976 05:09:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:14:55.976 05:09:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:14:55.976 05:09:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:55.976 05:09:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:55.976 05:09:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:55.976 05:09:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 01:14:55.976 05:09:38 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:14:55.976 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:14:55.976 05:09:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 01:14:55.976 05:09:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 01:14:55.976 05:09:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 01:14:55.976 05:09:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 01:14:55.976 05:09:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 01:14:55.976 05:09:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 01:14:55.976 05:09:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 01:14:55.976 05:09:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 01:14:55.976 05:09:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 01:14:55.976 05:09:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 01:14:55.976 05:09:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 01:14:55.976 INFO: launching applications... 01:14:55.976 05:09:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
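One detail in the nvmf/common.sh lines above is worth calling out: the message "[: : integer expression expected" is not a test failure. It is bash complaining that `[ '' -eq 1 ]` tried to compare an empty expansion numerically; the trace does not name the variable that expanded empty, but the usual guard is to give it a numeric default before the test:

    # '[ "" -eq 1 ]' prints "integer expression expected"; defaulting avoids it.
    flag=""
    if [ "${flag:-0}" -eq 1 ]; then
        echo "feature on"
    else
        echo "feature off"
    fi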
01:14:55.976 05:09:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 01:14:55.976 05:09:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 01:14:55.976 05:09:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 01:14:55.976 05:09:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 01:14:55.976 05:09:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 01:14:55.976 05:09:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 01:14:55.976 05:09:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:14:55.976 05:09:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 01:14:55.976 05:09:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57794 01:14:55.976 Waiting for target to run... 01:14:55.976 05:09:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 01:14:55.976 05:09:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57794 /var/tmp/spdk_tgt.sock 01:14:55.976 05:09:38 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57794 ']' 01:14:55.976 05:09:38 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 01:14:55.976 05:09:38 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 01:14:55.976 05:09:38 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 01:14:55.976 05:09:38 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 01:14:55.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 01:14:55.976 05:09:38 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 01:14:55.976 05:09:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 01:14:55.976 [2024-12-09 05:09:38.364293] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:14:55.976 [2024-12-09 05:09:38.364396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57794 ] 01:14:56.542 [2024-12-09 05:09:38.903689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:56.543 [2024-12-09 05:09:38.948657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:14:56.543 [2024-12-09 05:09:38.977817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:14:56.801 05:09:39 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:14:56.801 05:09:39 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 01:14:56.801 01:14:56.801 05:09:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 01:14:56.802 INFO: shutting down applications... 01:14:56.802 05:09:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
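Stripped of the harness bookkeeping, the launch the extra_key test just performed comes down to: start the target pinned to one core with a 1024 MiB memory budget, point it at a non-default RPC socket, and feed it a JSON configuration at startup; the test then only needs to wait for the socket and shut the target back down. Roughly as follows, with the flags copied from the trace and the backgrounding plus readiness poll being simplifications of the real waitforlisten helper:

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    tgt_pid=$!
    # Crude stand-in for waitforlisten: poll the RPC socket until it answers.
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.2
    done
    echo "target up as pid $tgt_pid"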
01:14:56.802 05:09:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 01:14:56.802 05:09:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 01:14:56.802 05:09:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 01:14:56.802 05:09:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57794 ]] 01:14:56.802 05:09:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57794 01:14:56.802 05:09:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 01:14:56.802 05:09:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:14:56.802 05:09:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57794 01:14:56.802 05:09:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 01:14:57.370 05:09:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 01:14:57.370 05:09:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 01:14:57.370 05:09:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57794 01:14:57.370 05:09:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 01:14:57.370 05:09:39 json_config_extra_key -- json_config/common.sh@43 -- # break 01:14:57.370 05:09:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 01:14:57.370 SPDK target shutdown done 01:14:57.370 05:09:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 01:14:57.370 Success 01:14:57.370 05:09:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 01:14:57.370 01:14:57.370 real 0m1.650s 01:14:57.370 user 0m1.240s 01:14:57.370 sys 0m0.581s 01:14:57.370 05:09:39 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:57.370 05:09:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 01:14:57.370 ************************************ 01:14:57.370 END TEST json_config_extra_key 01:14:57.370 ************************************ 01:14:57.370 05:09:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 01:14:57.370 05:09:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:57.370 05:09:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:57.370 05:09:39 -- common/autotest_common.sh@10 -- # set +x 01:14:57.370 ************************************ 01:14:57.370 START TEST alias_rpc 01:14:57.370 ************************************ 01:14:57.370 05:09:39 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 01:14:57.630 * Looking for test storage... 
01:14:57.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 01:14:57.630 05:09:39 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:14:57.630 05:09:39 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 01:14:57.630 05:09:39 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:14:57.630 05:09:39 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@345 -- # : 1 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 01:14:57.630 05:09:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 01:14:57.630 05:09:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:14:57.630 05:09:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:14:57.630 05:09:40 alias_rpc -- scripts/common.sh@368 -- # return 0 01:14:57.630 05:09:40 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:14:57.630 05:09:40 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:14:57.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:57.630 --rc genhtml_branch_coverage=1 01:14:57.630 --rc genhtml_function_coverage=1 01:14:57.630 --rc genhtml_legend=1 01:14:57.630 --rc geninfo_all_blocks=1 01:14:57.630 --rc geninfo_unexecuted_blocks=1 01:14:57.630 01:14:57.630 ' 01:14:57.630 05:09:40 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:14:57.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:57.630 --rc genhtml_branch_coverage=1 01:14:57.630 --rc genhtml_function_coverage=1 01:14:57.630 --rc genhtml_legend=1 01:14:57.630 --rc geninfo_all_blocks=1 01:14:57.630 --rc geninfo_unexecuted_blocks=1 01:14:57.630 01:14:57.630 ' 01:14:57.630 05:09:40 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:14:57.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:57.630 --rc genhtml_branch_coverage=1 01:14:57.630 --rc genhtml_function_coverage=1 01:14:57.630 --rc genhtml_legend=1 01:14:57.630 --rc geninfo_all_blocks=1 01:14:57.630 --rc geninfo_unexecuted_blocks=1 01:14:57.630 01:14:57.630 ' 01:14:57.630 05:09:40 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:14:57.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:57.630 --rc genhtml_branch_coverage=1 01:14:57.630 --rc genhtml_function_coverage=1 01:14:57.630 --rc genhtml_legend=1 01:14:57.630 --rc geninfo_all_blocks=1 01:14:57.630 --rc geninfo_unexecuted_blocks=1 01:14:57.630 01:14:57.630 ' 01:14:57.631 05:09:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 01:14:57.631 05:09:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57872 01:14:57.631 05:09:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:14:57.631 05:09:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57872 01:14:57.631 05:09:40 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57872 ']' 01:14:57.631 05:09:40 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:57.631 05:09:40 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:14:57.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:14:57.631 05:09:40 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:14:57.631 05:09:40 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:14:57.631 05:09:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 01:14:57.631 [2024-12-09 05:09:40.059544] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
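The alias_rpc run that follows drives the target through `rpc.py load_config -i` and then tears it down with the stock killprocess helper: it checks that a PID was passed, resolves the process name with `ps --no-headers -o comm=` (an SPDK reactor shows up as reactor_0), special-cases a `sudo` wrapper, and otherwise kills the PID and waits for it. A reduced sketch of that shape, not the exact autotest_common.sh code:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        # Resolve the short command name; SPDK reactors appear as reactor_0, reactor_1, ...
        local name
        name=$(ps --no-headers -o comm= "$pid")
        if [ "$name" = "sudo" ]; then
            # The real helper handles the sudo wrapper differently; simplified here.
            echo "refusing to kill a sudo wrapper directly"
            return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        # wait only works for children of this shell, which is how the tests start spdk_tgt.
        wait "$pid" 2>/dev/null
    }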
01:14:57.631 [2024-12-09 05:09:40.059609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57872 ] 01:14:57.890 [2024-12-09 05:09:40.212146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:57.890 [2024-12-09 05:09:40.262440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:14:57.890 [2024-12-09 05:09:40.317407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:14:58.472 05:09:40 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:14:58.472 05:09:40 alias_rpc -- common/autotest_common.sh@868 -- # return 0 01:14:58.472 05:09:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 01:14:58.745 05:09:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57872 01:14:58.745 05:09:41 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57872 ']' 01:14:58.745 05:09:41 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57872 01:14:58.745 05:09:41 alias_rpc -- common/autotest_common.sh@959 -- # uname 01:14:58.745 05:09:41 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:14:58.745 05:09:41 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57872 01:14:58.745 05:09:41 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:14:58.745 05:09:41 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:14:58.745 killing process with pid 57872 01:14:58.745 05:09:41 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57872' 01:14:58.745 05:09:41 alias_rpc -- common/autotest_common.sh@973 -- # kill 57872 01:14:58.745 05:09:41 alias_rpc -- common/autotest_common.sh@978 -- # wait 57872 01:14:59.314 01:14:59.314 real 0m1.733s 01:14:59.314 user 0m1.871s 01:14:59.314 sys 0m0.400s 01:14:59.314 05:09:41 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:59.314 05:09:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 01:14:59.314 ************************************ 01:14:59.314 END TEST alias_rpc 01:14:59.314 ************************************ 01:14:59.314 05:09:41 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 01:14:59.314 05:09:41 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 01:14:59.314 05:09:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:59.314 05:09:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:59.314 05:09:41 -- common/autotest_common.sh@10 -- # set +x 01:14:59.314 ************************************ 01:14:59.314 START TEST spdkcli_tcp 01:14:59.314 ************************************ 01:14:59.314 05:09:41 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 01:14:59.314 * Looking for test storage... 
01:14:59.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 01:14:59.314 05:09:41 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:14:59.314 05:09:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 01:14:59.314 05:09:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:14:59.574 05:09:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:14:59.574 05:09:41 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 01:14:59.574 05:09:41 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:14:59.574 05:09:41 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:14:59.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:59.574 --rc genhtml_branch_coverage=1 01:14:59.574 --rc genhtml_function_coverage=1 01:14:59.574 --rc genhtml_legend=1 01:14:59.574 --rc geninfo_all_blocks=1 01:14:59.574 --rc geninfo_unexecuted_blocks=1 01:14:59.574 01:14:59.574 ' 01:14:59.574 05:09:41 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:14:59.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:59.574 --rc genhtml_branch_coverage=1 01:14:59.574 --rc genhtml_function_coverage=1 01:14:59.574 --rc genhtml_legend=1 01:14:59.574 --rc geninfo_all_blocks=1 01:14:59.574 --rc geninfo_unexecuted_blocks=1 01:14:59.574 
01:14:59.574 ' 01:14:59.574 05:09:41 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:14:59.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:59.574 --rc genhtml_branch_coverage=1 01:14:59.574 --rc genhtml_function_coverage=1 01:14:59.574 --rc genhtml_legend=1 01:14:59.574 --rc geninfo_all_blocks=1 01:14:59.574 --rc geninfo_unexecuted_blocks=1 01:14:59.574 01:14:59.574 ' 01:14:59.574 05:09:41 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:14:59.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:59.574 --rc genhtml_branch_coverage=1 01:14:59.574 --rc genhtml_function_coverage=1 01:14:59.574 --rc genhtml_legend=1 01:14:59.574 --rc geninfo_all_blocks=1 01:14:59.574 --rc geninfo_unexecuted_blocks=1 01:14:59.574 01:14:59.574 ' 01:14:59.574 05:09:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 01:14:59.574 05:09:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 01:14:59.574 05:09:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 01:14:59.574 05:09:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 01:14:59.574 05:09:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 01:14:59.574 05:09:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 01:14:59.574 05:09:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 01:14:59.574 05:09:41 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:59.574 05:09:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:14:59.574 05:09:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57945 01:14:59.574 05:09:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 01:14:59.574 05:09:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57945 01:14:59.574 05:09:41 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57945 ']' 01:14:59.574 05:09:41 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:59.574 05:09:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 01:14:59.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:14:59.574 05:09:41 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:14:59.574 05:09:41 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 01:14:59.574 05:09:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:14:59.574 [2024-12-09 05:09:41.884088] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
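The point of the spdkcli_tcp run that follows is that the RPC server, which listens only on a UNIX-domain socket, can be reached over TCP by putting socat in front of it: the target is started on two cores (-m 0x3, main core 0), socat forwards 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py then issues rpc_get_methods against the TCP endpoint, producing the long method list below. Condensed to the three commands involved (port, retry and timeout values copied from the trace; waitforlisten/readiness checks omitted for brevity):

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt -m 0x3 -p 0 &                          # two cores, main core 0
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &     # TCP front-end for the RPC socket
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods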
01:14:59.574 [2024-12-09 05:09:41.884164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57945 ] 01:14:59.835 [2024-12-09 05:09:42.036270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:14:59.835 [2024-12-09 05:09:42.089237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:14:59.835 [2024-12-09 05:09:42.089241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:14:59.835 [2024-12-09 05:09:42.145113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:00.404 05:09:42 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:00.404 05:09:42 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 01:15:00.404 05:09:42 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 01:15:00.404 05:09:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57962 01:15:00.404 05:09:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 01:15:00.664 [ 01:15:00.664 "bdev_malloc_delete", 01:15:00.664 "bdev_malloc_create", 01:15:00.664 "bdev_null_resize", 01:15:00.664 "bdev_null_delete", 01:15:00.664 "bdev_null_create", 01:15:00.664 "bdev_nvme_cuse_unregister", 01:15:00.664 "bdev_nvme_cuse_register", 01:15:00.664 "bdev_opal_new_user", 01:15:00.664 "bdev_opal_set_lock_state", 01:15:00.664 "bdev_opal_delete", 01:15:00.664 "bdev_opal_get_info", 01:15:00.664 "bdev_opal_create", 01:15:00.664 "bdev_nvme_opal_revert", 01:15:00.664 "bdev_nvme_opal_init", 01:15:00.664 "bdev_nvme_send_cmd", 01:15:00.664 "bdev_nvme_set_keys", 01:15:00.664 "bdev_nvme_get_path_iostat", 01:15:00.664 "bdev_nvme_get_mdns_discovery_info", 01:15:00.664 "bdev_nvme_stop_mdns_discovery", 01:15:00.664 "bdev_nvme_start_mdns_discovery", 01:15:00.664 "bdev_nvme_set_multipath_policy", 01:15:00.664 "bdev_nvme_set_preferred_path", 01:15:00.664 "bdev_nvme_get_io_paths", 01:15:00.664 "bdev_nvme_remove_error_injection", 01:15:00.664 "bdev_nvme_add_error_injection", 01:15:00.664 "bdev_nvme_get_discovery_info", 01:15:00.664 "bdev_nvme_stop_discovery", 01:15:00.664 "bdev_nvme_start_discovery", 01:15:00.664 "bdev_nvme_get_controller_health_info", 01:15:00.664 "bdev_nvme_disable_controller", 01:15:00.664 "bdev_nvme_enable_controller", 01:15:00.664 "bdev_nvme_reset_controller", 01:15:00.664 "bdev_nvme_get_transport_statistics", 01:15:00.664 "bdev_nvme_apply_firmware", 01:15:00.664 "bdev_nvme_detach_controller", 01:15:00.664 "bdev_nvme_get_controllers", 01:15:00.664 "bdev_nvme_attach_controller", 01:15:00.664 "bdev_nvme_set_hotplug", 01:15:00.664 "bdev_nvme_set_options", 01:15:00.664 "bdev_passthru_delete", 01:15:00.664 "bdev_passthru_create", 01:15:00.664 "bdev_lvol_set_parent_bdev", 01:15:00.664 "bdev_lvol_set_parent", 01:15:00.664 "bdev_lvol_check_shallow_copy", 01:15:00.664 "bdev_lvol_start_shallow_copy", 01:15:00.664 "bdev_lvol_grow_lvstore", 01:15:00.664 "bdev_lvol_get_lvols", 01:15:00.664 "bdev_lvol_get_lvstores", 01:15:00.664 "bdev_lvol_delete", 01:15:00.664 "bdev_lvol_set_read_only", 01:15:00.664 "bdev_lvol_resize", 01:15:00.664 "bdev_lvol_decouple_parent", 01:15:00.664 "bdev_lvol_inflate", 01:15:00.664 "bdev_lvol_rename", 01:15:00.664 "bdev_lvol_clone_bdev", 01:15:00.664 "bdev_lvol_clone", 01:15:00.664 "bdev_lvol_snapshot", 
01:15:00.664 "bdev_lvol_create", 01:15:00.664 "bdev_lvol_delete_lvstore", 01:15:00.664 "bdev_lvol_rename_lvstore", 01:15:00.664 "bdev_lvol_create_lvstore", 01:15:00.664 "bdev_raid_set_options", 01:15:00.665 "bdev_raid_remove_base_bdev", 01:15:00.665 "bdev_raid_add_base_bdev", 01:15:00.665 "bdev_raid_delete", 01:15:00.665 "bdev_raid_create", 01:15:00.665 "bdev_raid_get_bdevs", 01:15:00.665 "bdev_error_inject_error", 01:15:00.665 "bdev_error_delete", 01:15:00.665 "bdev_error_create", 01:15:00.665 "bdev_split_delete", 01:15:00.665 "bdev_split_create", 01:15:00.665 "bdev_delay_delete", 01:15:00.665 "bdev_delay_create", 01:15:00.665 "bdev_delay_update_latency", 01:15:00.665 "bdev_zone_block_delete", 01:15:00.665 "bdev_zone_block_create", 01:15:00.665 "blobfs_create", 01:15:00.665 "blobfs_detect", 01:15:00.665 "blobfs_set_cache_size", 01:15:00.665 "bdev_aio_delete", 01:15:00.665 "bdev_aio_rescan", 01:15:00.665 "bdev_aio_create", 01:15:00.665 "bdev_ftl_set_property", 01:15:00.665 "bdev_ftl_get_properties", 01:15:00.665 "bdev_ftl_get_stats", 01:15:00.665 "bdev_ftl_unmap", 01:15:00.665 "bdev_ftl_unload", 01:15:00.665 "bdev_ftl_delete", 01:15:00.665 "bdev_ftl_load", 01:15:00.665 "bdev_ftl_create", 01:15:00.665 "bdev_virtio_attach_controller", 01:15:00.665 "bdev_virtio_scsi_get_devices", 01:15:00.665 "bdev_virtio_detach_controller", 01:15:00.665 "bdev_virtio_blk_set_hotplug", 01:15:00.665 "bdev_iscsi_delete", 01:15:00.665 "bdev_iscsi_create", 01:15:00.665 "bdev_iscsi_set_options", 01:15:00.665 "bdev_uring_delete", 01:15:00.665 "bdev_uring_rescan", 01:15:00.665 "bdev_uring_create", 01:15:00.665 "accel_error_inject_error", 01:15:00.665 "ioat_scan_accel_module", 01:15:00.665 "dsa_scan_accel_module", 01:15:00.665 "iaa_scan_accel_module", 01:15:00.665 "keyring_file_remove_key", 01:15:00.665 "keyring_file_add_key", 01:15:00.665 "keyring_linux_set_options", 01:15:00.665 "fsdev_aio_delete", 01:15:00.665 "fsdev_aio_create", 01:15:00.665 "iscsi_get_histogram", 01:15:00.665 "iscsi_enable_histogram", 01:15:00.665 "iscsi_set_options", 01:15:00.665 "iscsi_get_auth_groups", 01:15:00.665 "iscsi_auth_group_remove_secret", 01:15:00.665 "iscsi_auth_group_add_secret", 01:15:00.665 "iscsi_delete_auth_group", 01:15:00.665 "iscsi_create_auth_group", 01:15:00.665 "iscsi_set_discovery_auth", 01:15:00.665 "iscsi_get_options", 01:15:00.665 "iscsi_target_node_request_logout", 01:15:00.665 "iscsi_target_node_set_redirect", 01:15:00.665 "iscsi_target_node_set_auth", 01:15:00.665 "iscsi_target_node_add_lun", 01:15:00.665 "iscsi_get_stats", 01:15:00.665 "iscsi_get_connections", 01:15:00.665 "iscsi_portal_group_set_auth", 01:15:00.665 "iscsi_start_portal_group", 01:15:00.665 "iscsi_delete_portal_group", 01:15:00.665 "iscsi_create_portal_group", 01:15:00.665 "iscsi_get_portal_groups", 01:15:00.665 "iscsi_delete_target_node", 01:15:00.665 "iscsi_target_node_remove_pg_ig_maps", 01:15:00.665 "iscsi_target_node_add_pg_ig_maps", 01:15:00.665 "iscsi_create_target_node", 01:15:00.665 "iscsi_get_target_nodes", 01:15:00.665 "iscsi_delete_initiator_group", 01:15:00.665 "iscsi_initiator_group_remove_initiators", 01:15:00.665 "iscsi_initiator_group_add_initiators", 01:15:00.665 "iscsi_create_initiator_group", 01:15:00.665 "iscsi_get_initiator_groups", 01:15:00.665 "nvmf_set_crdt", 01:15:00.665 "nvmf_set_config", 01:15:00.665 "nvmf_set_max_subsystems", 01:15:00.665 "nvmf_stop_mdns_prr", 01:15:00.665 "nvmf_publish_mdns_prr", 01:15:00.665 "nvmf_subsystem_get_listeners", 01:15:00.665 "nvmf_subsystem_get_qpairs", 01:15:00.665 
"nvmf_subsystem_get_controllers", 01:15:00.665 "nvmf_get_stats", 01:15:00.665 "nvmf_get_transports", 01:15:00.665 "nvmf_create_transport", 01:15:00.665 "nvmf_get_targets", 01:15:00.665 "nvmf_delete_target", 01:15:00.665 "nvmf_create_target", 01:15:00.665 "nvmf_subsystem_allow_any_host", 01:15:00.665 "nvmf_subsystem_set_keys", 01:15:00.665 "nvmf_subsystem_remove_host", 01:15:00.665 "nvmf_subsystem_add_host", 01:15:00.665 "nvmf_ns_remove_host", 01:15:00.665 "nvmf_ns_add_host", 01:15:00.665 "nvmf_subsystem_remove_ns", 01:15:00.665 "nvmf_subsystem_set_ns_ana_group", 01:15:00.665 "nvmf_subsystem_add_ns", 01:15:00.665 "nvmf_subsystem_listener_set_ana_state", 01:15:00.665 "nvmf_discovery_get_referrals", 01:15:00.665 "nvmf_discovery_remove_referral", 01:15:00.665 "nvmf_discovery_add_referral", 01:15:00.665 "nvmf_subsystem_remove_listener", 01:15:00.665 "nvmf_subsystem_add_listener", 01:15:00.665 "nvmf_delete_subsystem", 01:15:00.665 "nvmf_create_subsystem", 01:15:00.665 "nvmf_get_subsystems", 01:15:00.665 "env_dpdk_get_mem_stats", 01:15:00.665 "nbd_get_disks", 01:15:00.665 "nbd_stop_disk", 01:15:00.665 "nbd_start_disk", 01:15:00.665 "ublk_recover_disk", 01:15:00.665 "ublk_get_disks", 01:15:00.665 "ublk_stop_disk", 01:15:00.665 "ublk_start_disk", 01:15:00.665 "ublk_destroy_target", 01:15:00.665 "ublk_create_target", 01:15:00.665 "virtio_blk_create_transport", 01:15:00.665 "virtio_blk_get_transports", 01:15:00.665 "vhost_controller_set_coalescing", 01:15:00.665 "vhost_get_controllers", 01:15:00.665 "vhost_delete_controller", 01:15:00.665 "vhost_create_blk_controller", 01:15:00.665 "vhost_scsi_controller_remove_target", 01:15:00.665 "vhost_scsi_controller_add_target", 01:15:00.665 "vhost_start_scsi_controller", 01:15:00.665 "vhost_create_scsi_controller", 01:15:00.665 "thread_set_cpumask", 01:15:00.665 "scheduler_set_options", 01:15:00.665 "framework_get_governor", 01:15:00.665 "framework_get_scheduler", 01:15:00.665 "framework_set_scheduler", 01:15:00.665 "framework_get_reactors", 01:15:00.665 "thread_get_io_channels", 01:15:00.665 "thread_get_pollers", 01:15:00.665 "thread_get_stats", 01:15:00.665 "framework_monitor_context_switch", 01:15:00.665 "spdk_kill_instance", 01:15:00.665 "log_enable_timestamps", 01:15:00.665 "log_get_flags", 01:15:00.665 "log_clear_flag", 01:15:00.665 "log_set_flag", 01:15:00.665 "log_get_level", 01:15:00.665 "log_set_level", 01:15:00.665 "log_get_print_level", 01:15:00.665 "log_set_print_level", 01:15:00.665 "framework_enable_cpumask_locks", 01:15:00.665 "framework_disable_cpumask_locks", 01:15:00.665 "framework_wait_init", 01:15:00.665 "framework_start_init", 01:15:00.665 "scsi_get_devices", 01:15:00.665 "bdev_get_histogram", 01:15:00.665 "bdev_enable_histogram", 01:15:00.665 "bdev_set_qos_limit", 01:15:00.665 "bdev_set_qd_sampling_period", 01:15:00.665 "bdev_get_bdevs", 01:15:00.665 "bdev_reset_iostat", 01:15:00.665 "bdev_get_iostat", 01:15:00.665 "bdev_examine", 01:15:00.665 "bdev_wait_for_examine", 01:15:00.665 "bdev_set_options", 01:15:00.665 "accel_get_stats", 01:15:00.665 "accel_set_options", 01:15:00.665 "accel_set_driver", 01:15:00.665 "accel_crypto_key_destroy", 01:15:00.665 "accel_crypto_keys_get", 01:15:00.665 "accel_crypto_key_create", 01:15:00.665 "accel_assign_opc", 01:15:00.665 "accel_get_module_info", 01:15:00.665 "accel_get_opc_assignments", 01:15:00.665 "vmd_rescan", 01:15:00.665 "vmd_remove_device", 01:15:00.665 "vmd_enable", 01:15:00.665 "sock_get_default_impl", 01:15:00.665 "sock_set_default_impl", 01:15:00.665 "sock_impl_set_options", 01:15:00.665 
"sock_impl_get_options", 01:15:00.665 "iobuf_get_stats", 01:15:00.665 "iobuf_set_options", 01:15:00.665 "keyring_get_keys", 01:15:00.665 "framework_get_pci_devices", 01:15:00.665 "framework_get_config", 01:15:00.665 "framework_get_subsystems", 01:15:00.665 "fsdev_set_opts", 01:15:00.665 "fsdev_get_opts", 01:15:00.665 "trace_get_info", 01:15:00.665 "trace_get_tpoint_group_mask", 01:15:00.665 "trace_disable_tpoint_group", 01:15:00.665 "trace_enable_tpoint_group", 01:15:00.665 "trace_clear_tpoint_mask", 01:15:00.665 "trace_set_tpoint_mask", 01:15:00.665 "notify_get_notifications", 01:15:00.665 "notify_get_types", 01:15:00.665 "spdk_get_version", 01:15:00.665 "rpc_get_methods" 01:15:00.665 ] 01:15:00.665 05:09:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 01:15:00.665 05:09:42 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:15:00.665 05:09:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:15:00.665 05:09:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:15:00.665 05:09:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57945 01:15:00.665 05:09:42 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57945 ']' 01:15:00.665 05:09:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57945 01:15:00.665 05:09:42 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 01:15:00.665 05:09:42 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:00.665 05:09:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57945 01:15:00.666 05:09:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:00.666 05:09:43 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:00.666 killing process with pid 57945 01:15:00.666 05:09:43 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57945' 01:15:00.666 05:09:43 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57945 01:15:00.666 05:09:43 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57945 01:15:00.924 01:15:00.924 real 0m1.777s 01:15:00.924 user 0m3.063s 01:15:00.924 sys 0m0.487s 01:15:00.924 05:09:43 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:00.924 05:09:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 01:15:00.924 ************************************ 01:15:00.924 END TEST spdkcli_tcp 01:15:00.924 ************************************ 01:15:01.184 05:09:43 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 01:15:01.184 05:09:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:01.184 05:09:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:01.184 05:09:43 -- common/autotest_common.sh@10 -- # set +x 01:15:01.184 ************************************ 01:15:01.184 START TEST dpdk_mem_utility 01:15:01.184 ************************************ 01:15:01.184 05:09:43 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 01:15:01.184 * Looking for test storage... 
01:15:01.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 01:15:01.185 05:09:43 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:01.185 05:09:43 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 01:15:01.185 05:09:43 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:01.185 05:09:43 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:01.185 05:09:43 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 01:15:01.185 05:09:43 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:01.185 05:09:43 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:01.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:01.185 --rc genhtml_branch_coverage=1 01:15:01.185 --rc genhtml_function_coverage=1 01:15:01.185 --rc genhtml_legend=1 01:15:01.185 --rc geninfo_all_blocks=1 01:15:01.185 --rc geninfo_unexecuted_blocks=1 01:15:01.185 01:15:01.185 ' 01:15:01.185 05:09:43 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:01.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:01.185 --rc 
genhtml_branch_coverage=1 01:15:01.185 --rc genhtml_function_coverage=1 01:15:01.185 --rc genhtml_legend=1 01:15:01.185 --rc geninfo_all_blocks=1 01:15:01.185 --rc geninfo_unexecuted_blocks=1 01:15:01.185 01:15:01.185 ' 01:15:01.185 05:09:43 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:01.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:01.185 --rc genhtml_branch_coverage=1 01:15:01.185 --rc genhtml_function_coverage=1 01:15:01.185 --rc genhtml_legend=1 01:15:01.185 --rc geninfo_all_blocks=1 01:15:01.185 --rc geninfo_unexecuted_blocks=1 01:15:01.185 01:15:01.185 ' 01:15:01.185 05:09:43 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:01.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:01.185 --rc genhtml_branch_coverage=1 01:15:01.185 --rc genhtml_function_coverage=1 01:15:01.185 --rc genhtml_legend=1 01:15:01.185 --rc geninfo_all_blocks=1 01:15:01.185 --rc geninfo_unexecuted_blocks=1 01:15:01.185 01:15:01.185 ' 01:15:01.185 05:09:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 01:15:01.185 05:09:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58044 01:15:01.185 05:09:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:15:01.185 05:09:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58044 01:15:01.185 05:09:43 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58044 ']' 01:15:01.185 05:09:43 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:01.185 05:09:43 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:01.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:01.185 05:09:43 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:01.185 05:09:43 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:01.185 05:09:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:15:01.443 [2024-12-09 05:09:43.693653] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
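The dpdk_mem_utility test starting here exercises the DPDK memory introspection path: the running target is asked to dump its memory state via the env_dpdk_get_mem_stats RPC (the reply below names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then summarizes heaps, mempools and memzones from that dump. Reduced to the commands involved, assuming the default /var/tmp/spdk.sock socket this spdk_tgt instance is listening on; judging by the trace, dpdk_mem_info.py picks up the dump file's default location on its own:

    cd /home/vagrant/spdk_repo/spdk
    # Ask the running target to dump its DPDK memory state to a file.
    ./scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
    # Summarize the dump; the trace also runs it with -m 0, which narrows the
    # report to a single heap/ID (exact semantics per the script's own help).
    ./scripts/dpdk_mem_info.py
    ./scripts/dpdk_mem_info.py -m 0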
01:15:01.443 [2024-12-09 05:09:43.693726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58044 ] 01:15:01.444 [2024-12-09 05:09:43.825421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:01.444 [2024-12-09 05:09:43.874971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:01.702 [2024-12-09 05:09:43.930296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:02.279 05:09:44 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:02.279 05:09:44 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 01:15:02.279 05:09:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 01:15:02.279 05:09:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 01:15:02.279 05:09:44 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:02.279 05:09:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:15:02.279 { 01:15:02.279 "filename": "/tmp/spdk_mem_dump.txt" 01:15:02.279 } 01:15:02.279 05:09:44 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:02.279 05:09:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 01:15:02.279 DPDK memory size 818.000000 MiB in 1 heap(s) 01:15:02.279 1 heaps totaling size 818.000000 MiB 01:15:02.279 size: 818.000000 MiB heap id: 0 01:15:02.279 end heaps---------- 01:15:02.279 9 mempools totaling size 603.782043 MiB 01:15:02.279 size: 212.674988 MiB name: PDU_immediate_data_Pool 01:15:02.279 size: 158.602051 MiB name: PDU_data_out_Pool 01:15:02.279 size: 100.555481 MiB name: bdev_io_58044 01:15:02.279 size: 50.003479 MiB name: msgpool_58044 01:15:02.279 size: 36.509338 MiB name: fsdev_io_58044 01:15:02.279 size: 21.763794 MiB name: PDU_Pool 01:15:02.279 size: 19.513306 MiB name: SCSI_TASK_Pool 01:15:02.279 size: 4.133484 MiB name: evtpool_58044 01:15:02.279 size: 0.026123 MiB name: Session_Pool 01:15:02.279 end mempools------- 01:15:02.279 6 memzones totaling size 4.142822 MiB 01:15:02.279 size: 1.000366 MiB name: RG_ring_0_58044 01:15:02.279 size: 1.000366 MiB name: RG_ring_1_58044 01:15:02.279 size: 1.000366 MiB name: RG_ring_4_58044 01:15:02.279 size: 1.000366 MiB name: RG_ring_5_58044 01:15:02.279 size: 0.125366 MiB name: RG_ring_2_58044 01:15:02.279 size: 0.015991 MiB name: RG_ring_3_58044 01:15:02.279 end memzones------- 01:15:02.279 05:09:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 01:15:02.279 heap id: 0 total size: 818.000000 MiB number of busy elements: 319 number of free elements: 15 01:15:02.279 list of free elements. 
size: 10.802124 MiB 01:15:02.279 element at address: 0x200019200000 with size: 0.999878 MiB 01:15:02.279 element at address: 0x200019400000 with size: 0.999878 MiB 01:15:02.279 element at address: 0x200032000000 with size: 0.994446 MiB 01:15:02.279 element at address: 0x200000400000 with size: 0.993958 MiB 01:15:02.279 element at address: 0x200006400000 with size: 0.959839 MiB 01:15:02.279 element at address: 0x200012c00000 with size: 0.944275 MiB 01:15:02.279 element at address: 0x200019600000 with size: 0.936584 MiB 01:15:02.279 element at address: 0x200000200000 with size: 0.717346 MiB 01:15:02.279 element at address: 0x20001ae00000 with size: 0.567139 MiB 01:15:02.279 element at address: 0x20000a600000 with size: 0.488892 MiB 01:15:02.279 element at address: 0x200000c00000 with size: 0.486267 MiB 01:15:02.279 element at address: 0x200019800000 with size: 0.485657 MiB 01:15:02.279 element at address: 0x200003e00000 with size: 0.480286 MiB 01:15:02.279 element at address: 0x200028200000 with size: 0.395935 MiB 01:15:02.279 element at address: 0x200000800000 with size: 0.351746 MiB 01:15:02.279 list of standard malloc elements. size: 199.268982 MiB 01:15:02.279 element at address: 0x20000a7fff80 with size: 132.000122 MiB 01:15:02.279 element at address: 0x2000065fff80 with size: 64.000122 MiB 01:15:02.279 element at address: 0x2000192fff80 with size: 1.000122 MiB 01:15:02.279 element at address: 0x2000194fff80 with size: 1.000122 MiB 01:15:02.279 element at address: 0x2000196fff80 with size: 1.000122 MiB 01:15:02.279 element at address: 0x2000003d9f00 with size: 0.140747 MiB 01:15:02.279 element at address: 0x2000196eff00 with size: 0.062622 MiB 01:15:02.279 element at address: 0x2000003fdf80 with size: 0.007935 MiB 01:15:02.279 element at address: 0x2000196efdc0 with size: 0.000305 MiB 01:15:02.279 element at address: 0x2000002d7c40 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000003d9e40 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004fe740 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004fe800 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004fe980 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004fea40 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004feb00 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004febc0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004fec80 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004fed40 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004fee00 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004feec0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004fef80 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ff040 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ff100 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ff280 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ff340 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ff400 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ff580 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ff640 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ff700 with size: 0.000183 MiB 
01:15:02.279 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ff880 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ff940 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ffa00 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ffac0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ffd80 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000004ffe40 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000085a0c0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000085a2c0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000085e580 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087e840 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087e900 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087e9c0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087ea80 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087eb40 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087ec00 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087ecc0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087ed80 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087ee40 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087ef00 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087efc0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087f080 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087f140 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087f200 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087f2c0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087f380 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087f440 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087f500 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087f5c0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x20000087f680 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000008ff940 with size: 0.000183 MiB 01:15:02.279 element at address: 0x2000008ffb40 with size: 0.000183 MiB 01:15:02.279 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x200000c7c880 with size: 0.000183 MiB 01:15:02.279 element at address: 0x200000c7c940 with size: 0.000183 MiB 01:15:02.279 element at address: 0x200000c7ca00 with size: 0.000183 MiB 01:15:02.279 element at address: 0x200000c7cac0 with size: 0.000183 MiB 01:15:02.279 element at address: 0x200000c7cb80 with size: 0.000183 MiB 01:15:02.279 element at address: 0x200000c7cc40 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7cd00 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7ce80 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7cf40 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7d000 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7d180 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7d240 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7d300 with size: 0.000183 MiB 01:15:02.280 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7d480 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7d540 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7d600 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7d780 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7d840 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7d900 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7da80 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7db40 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7dc00 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7dd80 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7de40 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7df00 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7e080 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7e140 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7e200 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7e380 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7e440 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7e500 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7e680 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7e740 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7e800 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7e980 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7ea40 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7eb00 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7ec80 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000c7ed40 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000cff000 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200000cff0c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200003e7af40 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200003e7b000 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200003e7b180 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200003e7b240 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200003e7b300 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200003e7b480 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200003e7b540 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200003e7b600 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200003efb980 with size: 0.000183 MiB 01:15:02.280 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 01:15:02.280 element at address: 0x20000a67d280 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20000a67d340 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20000a67d400 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20000a67d580 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20000a67d640 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20000a67d700 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20000a67d880 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20000a67d940 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20000a67da00 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20000a67dac0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 01:15:02.280 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x2000196efc40 with size: 0.000183 MiB 01:15:02.280 element at address: 0x2000196efd00 with size: 0.000183 MiB 01:15:02.280 element at address: 0x2000198bc740 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae91300 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae913c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae91480 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae91540 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae91600 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae916c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae91780 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae91840 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae91900 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae919c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae91a80 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae91b40 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae91c00 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae91d80 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae91e40 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae91f00 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92080 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92140 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92200 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae922c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92380 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92440 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92500 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae925c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92680 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92740 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92800 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae928c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92980 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92a40 with size: 0.000183 MiB 
01:15:02.280 element at address: 0x20001ae92b00 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92c80 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92d40 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92e00 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae92f80 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93040 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93100 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae931c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93280 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93340 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93400 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae934c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93580 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93640 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93700 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae937c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93880 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93940 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93a00 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93b80 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93c40 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93d00 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93e80 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae93f40 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae94000 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae940c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae94180 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae94240 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae94300 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae943c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae94480 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae94540 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae94600 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae946c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae94780 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae94840 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae94900 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae949c0 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae94a80 with size: 0.000183 MiB 01:15:02.280 element at address: 0x20001ae94b40 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20001ae94c00 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20001ae94d80 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20001ae94e40 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20001ae94f00 with size: 0.000183 MiB 01:15:02.281 element at 
address: 0x20001ae94fc0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20001ae95080 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20001ae95140 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20001ae95200 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20001ae952c0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20001ae95380 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20001ae95440 with size: 0.000183 MiB 01:15:02.281 element at address: 0x2000282655c0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x200028265680 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826c280 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826c480 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826c540 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826c600 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826c6c0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826c780 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826c840 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826c900 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826c9c0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826ca80 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826cb40 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826cc00 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826ccc0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826cd80 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826ce40 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826cf00 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826cfc0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826d080 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826d140 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826d200 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826d2c0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826d380 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826d440 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826d500 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826d5c0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826d680 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826d740 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826d800 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826d8c0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826d980 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826da40 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826db00 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826dbc0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826dc80 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826dd40 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826de00 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826dec0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826df80 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826e040 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826e100 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826e1c0 
with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826e280 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826e340 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826e400 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826e4c0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826e580 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826e640 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826e700 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826e7c0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826e880 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826e940 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826ea00 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826eac0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826eb80 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826ec40 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826ed00 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826edc0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826ee80 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826ef40 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826f000 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826f0c0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826f180 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826f240 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826f300 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826f3c0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826f480 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826f540 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826f600 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826f6c0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826f780 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826f840 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826f900 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826f9c0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826fa80 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826fb40 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826fc00 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826fcc0 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826fd80 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826fe40 with size: 0.000183 MiB 01:15:02.281 element at address: 0x20002826ff00 with size: 0.000183 MiB 01:15:02.281 list of memzone associated elements. 
size: 607.928894 MiB 01:15:02.281 element at address: 0x20001ae95500 with size: 211.416748 MiB 01:15:02.281 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 01:15:02.281 element at address: 0x20002826ffc0 with size: 157.562561 MiB 01:15:02.281 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 01:15:02.281 element at address: 0x200012df1e80 with size: 100.055054 MiB 01:15:02.281 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58044_0 01:15:02.281 element at address: 0x200000dff380 with size: 48.003052 MiB 01:15:02.281 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58044_0 01:15:02.281 element at address: 0x200003ffdb80 with size: 36.008911 MiB 01:15:02.281 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58044_0 01:15:02.281 element at address: 0x2000199be940 with size: 20.255554 MiB 01:15:02.281 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 01:15:02.281 element at address: 0x2000321feb40 with size: 18.005066 MiB 01:15:02.281 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 01:15:02.281 element at address: 0x2000004fff00 with size: 3.000244 MiB 01:15:02.281 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58044_0 01:15:02.281 element at address: 0x2000009ffe00 with size: 2.000488 MiB 01:15:02.281 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58044 01:15:02.281 element at address: 0x2000002d7d00 with size: 1.008118 MiB 01:15:02.281 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58044 01:15:02.281 element at address: 0x20000a6fde40 with size: 1.008118 MiB 01:15:02.281 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 01:15:02.281 element at address: 0x2000198bc800 with size: 1.008118 MiB 01:15:02.281 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 01:15:02.281 element at address: 0x2000064fde40 with size: 1.008118 MiB 01:15:02.281 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 01:15:02.281 element at address: 0x200003efba40 with size: 1.008118 MiB 01:15:02.281 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 01:15:02.281 element at address: 0x200000cff180 with size: 1.000488 MiB 01:15:02.281 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58044 01:15:02.281 element at address: 0x2000008ffc00 with size: 1.000488 MiB 01:15:02.281 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58044 01:15:02.281 element at address: 0x200012cf1c80 with size: 1.000488 MiB 01:15:02.281 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58044 01:15:02.281 element at address: 0x2000320fe940 with size: 1.000488 MiB 01:15:02.281 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58044 01:15:02.281 element at address: 0x20000087f740 with size: 0.500488 MiB 01:15:02.281 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58044 01:15:02.281 element at address: 0x200000c7ee00 with size: 0.500488 MiB 01:15:02.281 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58044 01:15:02.281 element at address: 0x20000a67db80 with size: 0.500488 MiB 01:15:02.281 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 01:15:02.281 element at address: 0x200003e7b780 with size: 0.500488 MiB 01:15:02.281 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 01:15:02.281 element at address: 0x20001987c540 with size: 0.250488 MiB 01:15:02.281 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 01:15:02.281 element at address: 0x2000002b7a40 with size: 0.125488 MiB 01:15:02.281 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58044 01:15:02.281 element at address: 0x20000085e640 with size: 0.125488 MiB 01:15:02.281 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58044 01:15:02.282 element at address: 0x2000064f5b80 with size: 0.031738 MiB 01:15:02.282 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 01:15:02.282 element at address: 0x200028265740 with size: 0.023743 MiB 01:15:02.282 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 01:15:02.282 element at address: 0x20000085a380 with size: 0.016113 MiB 01:15:02.282 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58044 01:15:02.282 element at address: 0x20002826b880 with size: 0.002441 MiB 01:15:02.282 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 01:15:02.282 element at address: 0x2000004ffb80 with size: 0.000305 MiB 01:15:02.282 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58044 01:15:02.282 element at address: 0x2000008ffa00 with size: 0.000305 MiB 01:15:02.282 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58044 01:15:02.282 element at address: 0x20000085a180 with size: 0.000305 MiB 01:15:02.282 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58044 01:15:02.282 element at address: 0x20002826c340 with size: 0.000305 MiB 01:15:02.282 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 01:15:02.282 05:09:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 01:15:02.282 05:09:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58044 01:15:02.282 05:09:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58044 ']' 01:15:02.282 05:09:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58044 01:15:02.282 05:09:44 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 01:15:02.282 05:09:44 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:02.282 05:09:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58044 01:15:02.282 05:09:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:02.282 05:09:44 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:02.282 05:09:44 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58044' 01:15:02.282 killing process with pid 58044 01:15:02.282 05:09:44 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58044 01:15:02.282 05:09:44 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58044 01:15:02.849 01:15:02.849 real 0m1.657s 01:15:02.849 user 0m1.697s 01:15:02.849 sys 0m0.429s 01:15:02.849 05:09:45 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:02.849 05:09:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 01:15:02.849 ************************************ 01:15:02.849 END TEST dpdk_mem_utility 01:15:02.849 ************************************ 01:15:02.849 05:09:45 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 01:15:02.849 05:09:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:02.849 05:09:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:02.849 05:09:45 -- common/autotest_common.sh@10 -- # set +x 
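[Annotation] The dpdk_mem_utility test above reduces to three steps: start spdk_tgt, call the env_dpdk_get_mem_stats RPC (which makes the target write /tmp/spdk_mem_dump.txt and reply with that filename), then render the dump twice with scripts/dpdk_mem_info.py, first as the heap/mempool/memzone summary and then with -m 0 for the per-heap element listing shown above. A hedged Python sketch of the same flow against an already-running target; driving rpc.py over its default socket is an assumption, not something the test script itself does this way:

    import json
    import subprocess

    SPDK = "/home/vagrant/spdk_repo/spdk"                # paths as in the trace above
    RPC = f"{SPDK}/scripts/rpc.py"
    MEM_SCRIPT = f"{SPDK}/scripts/dpdk_mem_info.py"      # MEM_SCRIPT in the test

    # Ask the running spdk_tgt to dump its DPDK memory stats; the RPC replies
    # with the file it wrote, e.g. {"filename": "/tmp/spdk_mem_dump.txt"}.
    reply = subprocess.run([RPC, "env_dpdk_get_mem_stats"],
                           check=True, capture_output=True, text=True)
    print("dump written to", json.loads(reply.stdout)["filename"])

    # Summary view (heaps, mempools, memzones), then the detailed per-heap
    # view that produces the long element/memzone listing above (-m 0 = heap 0).
    subprocess.run([MEM_SCRIPT], check=True)
    subprocess.run([MEM_SCRIPT, "-m", "0"], check=True)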
01:15:02.849 ************************************ 01:15:02.849 START TEST event 01:15:02.849 ************************************ 01:15:02.849 05:09:45 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 01:15:02.849 * Looking for test storage... 01:15:02.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 01:15:02.849 05:09:45 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:02.849 05:09:45 event -- common/autotest_common.sh@1693 -- # lcov --version 01:15:02.849 05:09:45 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:03.107 05:09:45 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:03.107 05:09:45 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:03.107 05:09:45 event -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:03.107 05:09:45 event -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:03.107 05:09:45 event -- scripts/common.sh@336 -- # IFS=.-: 01:15:03.107 05:09:45 event -- scripts/common.sh@336 -- # read -ra ver1 01:15:03.107 05:09:45 event -- scripts/common.sh@337 -- # IFS=.-: 01:15:03.107 05:09:45 event -- scripts/common.sh@337 -- # read -ra ver2 01:15:03.107 05:09:45 event -- scripts/common.sh@338 -- # local 'op=<' 01:15:03.107 05:09:45 event -- scripts/common.sh@340 -- # ver1_l=2 01:15:03.107 05:09:45 event -- scripts/common.sh@341 -- # ver2_l=1 01:15:03.107 05:09:45 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:03.107 05:09:45 event -- scripts/common.sh@344 -- # case "$op" in 01:15:03.107 05:09:45 event -- scripts/common.sh@345 -- # : 1 01:15:03.107 05:09:45 event -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:03.107 05:09:45 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:15:03.107 05:09:45 event -- scripts/common.sh@365 -- # decimal 1 01:15:03.107 05:09:45 event -- scripts/common.sh@353 -- # local d=1 01:15:03.107 05:09:45 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:03.107 05:09:45 event -- scripts/common.sh@355 -- # echo 1 01:15:03.107 05:09:45 event -- scripts/common.sh@365 -- # ver1[v]=1 01:15:03.107 05:09:45 event -- scripts/common.sh@366 -- # decimal 2 01:15:03.107 05:09:45 event -- scripts/common.sh@353 -- # local d=2 01:15:03.107 05:09:45 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:03.107 05:09:45 event -- scripts/common.sh@355 -- # echo 2 01:15:03.107 05:09:45 event -- scripts/common.sh@366 -- # ver2[v]=2 01:15:03.107 05:09:45 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:03.107 05:09:45 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:03.107 05:09:45 event -- scripts/common.sh@368 -- # return 0 01:15:03.107 05:09:45 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:03.107 05:09:45 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:03.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:03.107 --rc genhtml_branch_coverage=1 01:15:03.107 --rc genhtml_function_coverage=1 01:15:03.107 --rc genhtml_legend=1 01:15:03.107 --rc geninfo_all_blocks=1 01:15:03.107 --rc geninfo_unexecuted_blocks=1 01:15:03.107 01:15:03.107 ' 01:15:03.107 05:09:45 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:03.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:03.107 --rc genhtml_branch_coverage=1 01:15:03.107 --rc genhtml_function_coverage=1 01:15:03.107 --rc genhtml_legend=1 01:15:03.107 --rc 
geninfo_all_blocks=1 01:15:03.107 --rc geninfo_unexecuted_blocks=1 01:15:03.107 01:15:03.107 ' 01:15:03.107 05:09:45 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:03.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:03.107 --rc genhtml_branch_coverage=1 01:15:03.107 --rc genhtml_function_coverage=1 01:15:03.107 --rc genhtml_legend=1 01:15:03.107 --rc geninfo_all_blocks=1 01:15:03.107 --rc geninfo_unexecuted_blocks=1 01:15:03.107 01:15:03.107 ' 01:15:03.107 05:09:45 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:03.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:03.107 --rc genhtml_branch_coverage=1 01:15:03.107 --rc genhtml_function_coverage=1 01:15:03.107 --rc genhtml_legend=1 01:15:03.107 --rc geninfo_all_blocks=1 01:15:03.107 --rc geninfo_unexecuted_blocks=1 01:15:03.107 01:15:03.107 ' 01:15:03.107 05:09:45 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 01:15:03.107 05:09:45 event -- bdev/nbd_common.sh@6 -- # set -e 01:15:03.107 05:09:45 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 01:15:03.107 05:09:45 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 01:15:03.107 05:09:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:03.107 05:09:45 event -- common/autotest_common.sh@10 -- # set +x 01:15:03.107 ************************************ 01:15:03.107 START TEST event_perf 01:15:03.107 ************************************ 01:15:03.107 05:09:45 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 01:15:03.107 Running I/O for 1 seconds...[2024-12-09 05:09:45.384423] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:15:03.107 [2024-12-09 05:09:45.384510] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58129 ] 01:15:03.108 [2024-12-09 05:09:45.541422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:15:03.366 [2024-12-09 05:09:45.590742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:15:03.366 [2024-12-09 05:09:45.591118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:03.366 [2024-12-09 05:09:45.590935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:15:03.366 Running I/O for 1 seconds...[2024-12-09 05:09:45.591122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:15:04.301 01:15:04.301 lcore 0: 201128 01:15:04.301 lcore 1: 201128 01:15:04.301 lcore 2: 201128 01:15:04.301 lcore 3: 201128 01:15:04.301 done. 
01:15:04.301 01:15:04.301 real 0m1.309s 01:15:04.301 user 0m4.144s 01:15:04.301 sys 0m0.044s 01:15:04.301 05:09:46 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:04.301 05:09:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 01:15:04.301 ************************************ 01:15:04.301 END TEST event_perf 01:15:04.301 ************************************ 01:15:04.301 05:09:46 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 01:15:04.301 05:09:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:15:04.301 05:09:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:04.301 05:09:46 event -- common/autotest_common.sh@10 -- # set +x 01:15:04.301 ************************************ 01:15:04.301 START TEST event_reactor 01:15:04.301 ************************************ 01:15:04.301 05:09:46 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 01:15:04.559 [2024-12-09 05:09:46.766626] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:15:04.559 [2024-12-09 05:09:46.766819] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58162 ] 01:15:04.559 [2024-12-09 05:09:46.909862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:04.559 [2024-12-09 05:09:46.963906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:05.935 test_start 01:15:05.935 oneshot 01:15:05.935 tick 100 01:15:05.935 tick 100 01:15:05.935 tick 250 01:15:05.935 tick 100 01:15:05.935 tick 100 01:15:05.935 tick 100 01:15:05.935 tick 250 01:15:05.935 tick 500 01:15:05.935 tick 100 01:15:05.935 tick 100 01:15:05.935 tick 250 01:15:05.935 tick 100 01:15:05.935 tick 100 01:15:05.935 test_end 01:15:05.935 01:15:05.935 real 0m1.313s 01:15:05.935 user 0m1.168s 01:15:05.935 sys 0m0.039s 01:15:05.935 ************************************ 01:15:05.935 END TEST event_reactor 01:15:05.935 ************************************ 01:15:05.935 05:09:48 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:05.935 05:09:48 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 01:15:05.935 05:09:48 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 01:15:05.935 05:09:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:15:05.935 05:09:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:05.935 05:09:48 event -- common/autotest_common.sh@10 -- # set +x 01:15:05.935 ************************************ 01:15:05.935 START TEST event_reactor_perf 01:15:05.935 ************************************ 01:15:05.935 05:09:48 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 01:15:05.935 [2024-12-09 05:09:48.146380] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:15:05.936 [2024-12-09 05:09:48.147180] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58198 ] 01:15:05.936 [2024-12-09 05:09:48.304986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:05.936 [2024-12-09 05:09:48.355141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:07.357 test_start 01:15:07.357 test_end 01:15:07.357 Performance: 478183 events per second 01:15:07.357 ************************************ 01:15:07.357 END TEST event_reactor_perf 01:15:07.357 ************************************ 01:15:07.357 01:15:07.357 real 0m1.322s 01:15:07.357 user 0m1.173s 01:15:07.357 sys 0m0.042s 01:15:07.357 05:09:49 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:07.357 05:09:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 01:15:07.357 05:09:49 event -- event/event.sh@49 -- # uname -s 01:15:07.357 05:09:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 01:15:07.358 05:09:49 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 01:15:07.358 05:09:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:07.358 05:09:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:07.358 05:09:49 event -- common/autotest_common.sh@10 -- # set +x 01:15:07.358 ************************************ 01:15:07.358 START TEST event_scheduler 01:15:07.358 ************************************ 01:15:07.358 05:09:49 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 01:15:07.358 * Looking for test storage... 
01:15:07.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 01:15:07.358 05:09:49 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:07.358 05:09:49 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 01:15:07.358 05:09:49 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:07.358 05:09:49 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@345 -- # : 1 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:07.358 05:09:49 event.event_scheduler -- scripts/common.sh@368 -- # return 0 01:15:07.358 05:09:49 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:07.358 05:09:49 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:07.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:07.358 --rc genhtml_branch_coverage=1 01:15:07.358 --rc genhtml_function_coverage=1 01:15:07.358 --rc genhtml_legend=1 01:15:07.358 --rc geninfo_all_blocks=1 01:15:07.358 --rc geninfo_unexecuted_blocks=1 01:15:07.358 01:15:07.358 ' 01:15:07.358 05:09:49 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:07.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:07.358 --rc genhtml_branch_coverage=1 01:15:07.358 --rc genhtml_function_coverage=1 01:15:07.358 --rc genhtml_legend=1 01:15:07.358 --rc geninfo_all_blocks=1 01:15:07.358 --rc geninfo_unexecuted_blocks=1 01:15:07.358 01:15:07.358 ' 01:15:07.358 05:09:49 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:07.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:07.358 --rc genhtml_branch_coverage=1 01:15:07.358 --rc genhtml_function_coverage=1 01:15:07.358 --rc genhtml_legend=1 01:15:07.358 --rc geninfo_all_blocks=1 01:15:07.358 --rc geninfo_unexecuted_blocks=1 01:15:07.358 01:15:07.358 ' 01:15:07.358 05:09:49 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:07.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:07.358 --rc genhtml_branch_coverage=1 01:15:07.358 --rc genhtml_function_coverage=1 01:15:07.358 --rc genhtml_legend=1 01:15:07.358 --rc geninfo_all_blocks=1 01:15:07.358 --rc geninfo_unexecuted_blocks=1 01:15:07.358 01:15:07.358 ' 01:15:07.358 05:09:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 01:15:07.358 05:09:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58267 01:15:07.358 05:09:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 01:15:07.358 05:09:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 01:15:07.358 05:09:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58267 01:15:07.358 05:09:49 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58267 ']' 01:15:07.358 05:09:49 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:07.358 05:09:49 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:07.358 05:09:49 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:07.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:07.358 05:09:49 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:07.358 05:09:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:15:07.358 [2024-12-09 05:09:49.779248] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:15:07.358 [2024-12-09 05:09:49.779433] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58267 ] 01:15:07.618 [2024-12-09 05:09:49.921582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:15:07.618 [2024-12-09 05:09:49.979940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:07.618 [2024-12-09 05:09:49.980131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:15:07.618 [2024-12-09 05:09:49.980303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:15:07.618 [2024-12-09 05:09:49.980320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:15:08.559 05:09:50 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:08.559 05:09:50 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 01:15:08.559 05:09:50 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 01:15:08.559 05:09:50 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:08.559 05:09:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:15:08.559 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:15:08.559 POWER: Cannot set governor of lcore 0 to userspace 01:15:08.559 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:15:08.559 POWER: Cannot set governor of lcore 0 to performance 01:15:08.559 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:15:08.559 POWER: Cannot set governor of lcore 0 to userspace 01:15:08.559 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 01:15:08.559 POWER: Cannot set governor of lcore 0 to userspace 01:15:08.559 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 01:15:08.559 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 01:15:08.559 POWER: Unable to set Power Management Environment for lcore 0 01:15:08.559 [2024-12-09 05:09:50.700774] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 01:15:08.559 [2024-12-09 05:09:50.700803] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 01:15:08.559 [2024-12-09 05:09:50.700827] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 01:15:08.559 [2024-12-09 05:09:50.700851] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 01:15:08.559 [2024-12-09 05:09:50.700873] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 01:15:08.559 [2024-12-09 05:09:50.700903] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 01:15:08.559 05:09:50 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:08.559 05:09:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 01:15:08.559 05:09:50 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:08.559 05:09:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:15:08.559 [2024-12-09 05:09:50.749474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:08.559 [2024-12-09 05:09:50.780889] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 01:15:08.559 05:09:50 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:08.559 05:09:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 01:15:08.559 05:09:50 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:08.559 05:09:50 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:08.559 05:09:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:15:08.559 ************************************ 01:15:08.559 START TEST scheduler_create_thread 01:15:08.559 ************************************ 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:15:08.559 2 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:15:08.559 3 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:15:08.559 4 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:15:08.559 5 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:08.559 05:09:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:15:08.560 6 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:15:08.560 7 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:15:08.560 8 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:15:08.560 9 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:15:08.560 10 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:08.560 05:09:50 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:08.560 05:09:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:15:09.130 05:09:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:09.130 05:09:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 01:15:09.130 05:09:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 01:15:09.130 05:09:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:09.130 05:09:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:15:10.512 ************************************ 01:15:10.512 END TEST scheduler_create_thread 01:15:10.512 ************************************ 01:15:10.512 05:09:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:10.512 01:15:10.512 real 0m1.746s 01:15:10.512 user 0m0.025s 01:15:10.512 sys 0m0.007s 01:15:10.512 05:09:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:10.512 05:09:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 01:15:10.512 05:09:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 01:15:10.512 05:09:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58267 01:15:10.512 05:09:52 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58267 ']' 01:15:10.512 05:09:52 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58267 01:15:10.512 05:09:52 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 01:15:10.512 05:09:52 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:10.512 05:09:52 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58267 01:15:10.512 killing process with pid 58267 01:15:10.512 05:09:52 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:15:10.512 05:09:52 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:15:10.512 05:09:52 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58267' 01:15:10.512 05:09:52 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58267 01:15:10.512 05:09:52 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58267 01:15:10.772 [2024-12-09 05:09:53.018994] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 01:15:11.034 ************************************ 01:15:11.034 END TEST event_scheduler 01:15:11.034 ************************************ 01:15:11.034 01:15:11.034 real 0m3.722s 01:15:11.034 user 0m6.573s 01:15:11.034 sys 0m0.431s 01:15:11.034 05:09:53 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:11.034 05:09:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 01:15:11.034 05:09:53 event -- event/event.sh@51 -- # modprobe -n nbd 01:15:11.034 05:09:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 01:15:11.034 05:09:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:11.034 05:09:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:11.034 05:09:53 event -- common/autotest_common.sh@10 -- # set +x 01:15:11.034 ************************************ 01:15:11.034 START TEST app_repeat 01:15:11.034 ************************************ 01:15:11.034 05:09:53 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 01:15:11.034 05:09:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:11.034 05:09:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:11.034 05:09:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 01:15:11.034 05:09:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 01:15:11.034 05:09:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 01:15:11.034 05:09:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 01:15:11.034 05:09:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 01:15:11.034 05:09:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58356 01:15:11.034 05:09:53 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 01:15:11.034 05:09:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 01:15:11.034 Process app_repeat pid: 58356 01:15:11.034 spdk_app_start Round 0 01:15:11.034 05:09:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58356' 01:15:11.034 05:09:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:15:11.034 05:09:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 01:15:11.034 05:09:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58356 /var/tmp/spdk-nbd.sock 01:15:11.034 05:09:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58356 ']' 01:15:11.034 05:09:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:15:11.034 05:09:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:11.034 05:09:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:15:11.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
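Note on the scheduler sequence traced above: the test drives everything over the JSON-RPC socket, and the cpufreq/POWER errors are expected inside a VM (no scaling governor is exposed to the guest), so the dynamic scheduler reports that it cannot initialize the dpdk governor and continues with its defaults (load limit 20, core limit 80, core busy 95). A minimal standalone sketch of the same two RPC calls, assuming an SPDK app is already running with framework initialization deferred (e.g. started with --wait-for-rpc, which is what makes the later framework_start_init call meaningful) and listening on /var/tmp/spdk.sock:

    #!/usr/bin/env bash
    # Sketch only, not part of the captured log; paths are the ones shown in the trace.
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    # Switch to the dynamic scheduler; in the trace it comes up with its default
    # load limit 20, core limit 80, core busy 95, despite the governor errors.
    "$rpc" -s "$sock" framework_set_scheduler dynamic

    # Finish deferred subsystem initialization so the reactors start scheduling threads.
    "$rpc" -s "$sock" framework_start_init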
01:15:11.034 05:09:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:11.034 05:09:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:15:11.034 [2024-12-09 05:09:53.343458] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:15:11.034 [2024-12-09 05:09:53.343615] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58356 ] 01:15:11.293 [2024-12-09 05:09:53.498756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:15:11.293 [2024-12-09 05:09:53.548952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:11.293 [2024-12-09 05:09:53.548952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:15:11.293 [2024-12-09 05:09:53.589807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:11.860 05:09:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:11.860 05:09:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:15:11.860 05:09:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:15:12.118 Malloc0 01:15:12.118 05:09:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:15:12.375 Malloc1 01:15:12.375 05:09:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:15:12.375 05:09:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:12.375 05:09:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:15:12.375 05:09:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:15:12.375 05:09:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:12.375 05:09:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:15:12.375 05:09:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:15:12.375 05:09:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:12.375 05:09:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:15:12.375 05:09:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:15:12.375 05:09:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:12.375 05:09:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:15:12.375 05:09:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:15:12.375 05:09:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:15:12.376 05:09:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:15:12.376 05:09:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:15:12.634 /dev/nbd0 01:15:12.634 05:09:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:15:12.634 05:09:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:15:12.634 05:09:54 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 01:15:12.634 05:09:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:15:12.634 05:09:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:15:12.634 05:09:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:15:12.634 05:09:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:15:12.634 05:09:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:15:12.634 05:09:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:15:12.634 05:09:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:15:12.634 05:09:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:15:12.634 1+0 records in 01:15:12.634 1+0 records out 01:15:12.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241249 s, 17.0 MB/s 01:15:12.634 05:09:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:15:12.634 05:09:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:15:12.634 05:09:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:15:12.634 05:09:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:15:12.634 05:09:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:15:12.634 05:09:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:15:12.634 05:09:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:15:12.634 05:09:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:15:12.892 /dev/nbd1 01:15:12.892 05:09:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:15:12.892 05:09:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:15:12.892 05:09:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:15:12.892 05:09:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:15:12.892 05:09:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:15:12.892 05:09:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:15:12.892 05:09:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:15:12.892 05:09:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:15:12.892 05:09:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:15:12.892 05:09:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:15:12.892 05:09:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:15:12.892 1+0 records in 01:15:12.892 1+0 records out 01:15:12.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303667 s, 13.5 MB/s 01:15:12.892 05:09:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:15:12.892 05:09:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:15:12.892 05:09:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:15:12.892 05:09:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:15:12.892 05:09:55 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 01:15:12.892 05:09:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:15:12.892 05:09:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:15:12.892 05:09:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:15:12.892 05:09:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:12.892 05:09:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:15:13.149 { 01:15:13.149 "nbd_device": "/dev/nbd0", 01:15:13.149 "bdev_name": "Malloc0" 01:15:13.149 }, 01:15:13.149 { 01:15:13.149 "nbd_device": "/dev/nbd1", 01:15:13.149 "bdev_name": "Malloc1" 01:15:13.149 } 01:15:13.149 ]' 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:15:13.149 { 01:15:13.149 "nbd_device": "/dev/nbd0", 01:15:13.149 "bdev_name": "Malloc0" 01:15:13.149 }, 01:15:13.149 { 01:15:13.149 "nbd_device": "/dev/nbd1", 01:15:13.149 "bdev_name": "Malloc1" 01:15:13.149 } 01:15:13.149 ]' 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:15:13.149 /dev/nbd1' 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:15:13.149 /dev/nbd1' 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:15:13.149 05:09:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:15:13.149 256+0 records in 01:15:13.149 256+0 records out 01:15:13.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192243 s, 54.5 MB/s 01:15:13.150 05:09:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:15:13.150 05:09:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:15:13.150 256+0 records in 01:15:13.150 256+0 records out 01:15:13.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183067 s, 57.3 MB/s 01:15:13.150 05:09:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:15:13.150 05:09:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:15:13.150 256+0 records in 01:15:13.150 
256+0 records out 01:15:13.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226759 s, 46.2 MB/s 01:15:13.150 05:09:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:15:13.150 05:09:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:13.150 05:09:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:15:13.150 05:09:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:15:13.150 05:09:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:15:13.150 05:09:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:15:13.150 05:09:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:15:13.150 05:09:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:15:13.150 05:09:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:15:13.150 05:09:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:15:13.150 05:09:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:15:13.150 05:09:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:15:13.408 05:09:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:15:13.408 05:09:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:13.408 05:09:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:13.408 05:09:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:15:13.408 05:09:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:15:13.408 05:09:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:15:13.408 05:09:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:15:13.408 05:09:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:15:13.408 05:09:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:15:13.408 05:09:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:15:13.408 05:09:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:15:13.408 05:09:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:15:13.408 05:09:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:15:13.668 05:09:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:15:13.668 05:09:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:15:13.668 05:09:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:15:13.668 05:09:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:15:13.668 05:09:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:15:13.668 05:09:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:15:13.668 05:09:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:15:13.668 05:09:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:15:13.668 05:09:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
01:15:13.668 05:09:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:15:13.668 05:09:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:15:13.668 05:09:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:15:13.668 05:09:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:15:13.668 05:09:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:13.668 05:09:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:15:13.927 05:09:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:15:13.927 05:09:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:15:13.927 05:09:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:15:13.927 05:09:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:15:13.927 05:09:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:15:13.927 05:09:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:15:13.927 05:09:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:15:13.927 05:09:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:15:13.927 05:09:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:15:13.927 05:09:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:15:13.927 05:09:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:15:13.927 05:09:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:15:13.927 05:09:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:15:14.186 05:09:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:15:14.448 [2024-12-09 05:09:56.763100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:15:14.448 [2024-12-09 05:09:56.810589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:14.448 [2024-12-09 05:09:56.810588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:15:14.448 [2024-12-09 05:09:56.850399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:14.448 [2024-12-09 05:09:56.850470] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:15:14.448 [2024-12-09 05:09:56.850478] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:15:17.747 spdk_app_start Round 1 01:15:17.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:15:17.747 05:09:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:15:17.747 05:09:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 01:15:17.747 05:09:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58356 /var/tmp/spdk-nbd.sock 01:15:17.747 05:09:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58356 ']' 01:15:17.747 05:09:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:15:17.747 05:09:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:17.747 05:09:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
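For reference, each app_repeat round traced above performs the same NBD round-trip. The sketch below replays it as a standalone script using only the commands and sizes visible in the log (64 MiB malloc bdevs with 4 KiB blocks, a 1 MiB random payload, O_DIRECT writes, cmp verification); the scratch path here is illustrative, since the test keeps its files under the spdk repo.

    #!/usr/bin/env bash
    # Sketch of one app_repeat data-verify round, reconstructed from the trace above.
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=/tmp/nbdrandtest          # illustrative scratch path

    modprobe nbd                  # the test loads the nbd kernel module up front

    # Back two NBD devices with malloc bdevs (64 MiB, 4 KiB block size).
    "$rpc" -s "$sock" bdev_malloc_create 64 4096        # -> Malloc0
    "$rpc" -s "$sock" bdev_malloc_create 64 4096        # -> Malloc1
    "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

    # Write the same 1 MiB of random data to both devices and verify it reads back intact.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$nbd"
    done
    rm "$tmp"

    # Detach the devices again before the next round.
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1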
01:15:17.747 05:09:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:17.747 05:09:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:15:17.747 05:09:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:17.747 05:09:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:15:17.747 05:09:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:15:17.747 Malloc0 01:15:17.747 05:10:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:15:18.006 Malloc1 01:15:18.006 05:10:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:15:18.006 05:10:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:18.006 05:10:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:15:18.006 05:10:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:15:18.006 05:10:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:18.006 05:10:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:15:18.006 05:10:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:15:18.006 05:10:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:18.006 05:10:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:15:18.006 05:10:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:15:18.006 05:10:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:18.006 05:10:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:15:18.006 05:10:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:15:18.006 05:10:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:15:18.006 05:10:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:15:18.006 05:10:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:15:18.265 /dev/nbd0 01:15:18.265 05:10:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:15:18.265 05:10:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:15:18.265 05:10:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:15:18.265 05:10:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:15:18.265 05:10:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:15:18.265 05:10:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:15:18.265 05:10:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:15:18.265 05:10:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:15:18.265 05:10:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:15:18.265 05:10:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:15:18.265 05:10:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:15:18.265 1+0 records in 01:15:18.265 1+0 records out 
01:15:18.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287196 s, 14.3 MB/s 01:15:18.265 05:10:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:15:18.265 05:10:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:15:18.265 05:10:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:15:18.265 05:10:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:15:18.265 05:10:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:15:18.265 05:10:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:15:18.265 05:10:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:15:18.265 05:10:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:15:18.523 /dev/nbd1 01:15:18.523 05:10:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:15:18.523 05:10:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:15:18.523 05:10:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:15:18.523 05:10:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:15:18.523 05:10:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:15:18.523 05:10:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:15:18.523 05:10:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:15:18.523 05:10:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:15:18.523 05:10:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:15:18.523 05:10:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:15:18.523 05:10:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:15:18.523 1+0 records in 01:15:18.523 1+0 records out 01:15:18.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273905 s, 15.0 MB/s 01:15:18.523 05:10:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:15:18.523 05:10:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:15:18.523 05:10:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:15:18.523 05:10:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:15:18.523 05:10:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:15:18.523 05:10:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:15:18.523 05:10:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:15:18.523 05:10:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:15:18.523 05:10:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:18.523 05:10:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:15:18.782 { 01:15:18.782 "nbd_device": "/dev/nbd0", 01:15:18.782 "bdev_name": "Malloc0" 01:15:18.782 }, 01:15:18.782 { 01:15:18.782 "nbd_device": "/dev/nbd1", 01:15:18.782 "bdev_name": "Malloc1" 01:15:18.782 } 
01:15:18.782 ]' 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:15:18.782 { 01:15:18.782 "nbd_device": "/dev/nbd0", 01:15:18.782 "bdev_name": "Malloc0" 01:15:18.782 }, 01:15:18.782 { 01:15:18.782 "nbd_device": "/dev/nbd1", 01:15:18.782 "bdev_name": "Malloc1" 01:15:18.782 } 01:15:18.782 ]' 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:15:18.782 /dev/nbd1' 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:15:18.782 /dev/nbd1' 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:15:18.782 256+0 records in 01:15:18.782 256+0 records out 01:15:18.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0052876 s, 198 MB/s 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:15:18.782 256+0 records in 01:15:18.782 256+0 records out 01:15:18.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244322 s, 42.9 MB/s 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:15:18.782 256+0 records in 01:15:18.782 256+0 records out 01:15:18.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024388 s, 43.0 MB/s 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:15:18.782 05:10:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:15:19.040 05:10:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:15:19.040 05:10:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:15:19.040 05:10:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:15:19.040 05:10:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:15:19.040 05:10:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:15:19.041 05:10:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:15:19.041 05:10:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:15:19.041 05:10:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:15:19.041 05:10:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:15:19.041 05:10:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:15:19.299 05:10:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:15:19.299 05:10:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:15:19.299 05:10:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:15:19.299 05:10:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:15:19.299 05:10:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:15:19.299 05:10:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:15:19.299 05:10:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:15:19.299 05:10:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:15:19.299 05:10:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:15:19.299 05:10:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:19.299 05:10:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:15:19.558 05:10:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:15:19.558 05:10:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:15:19.558 05:10:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 01:15:19.558 05:10:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:15:19.558 05:10:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:15:19.558 05:10:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:15:19.558 05:10:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:15:19.558 05:10:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:15:19.558 05:10:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:15:19.558 05:10:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:15:19.558 05:10:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:15:19.558 05:10:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:15:19.558 05:10:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:15:19.817 05:10:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:15:19.817 [2024-12-09 05:10:02.225269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:15:20.076 [2024-12-09 05:10:02.278235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:15:20.076 [2024-12-09 05:10:02.278238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:20.076 [2024-12-09 05:10:02.319761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:20.076 [2024-12-09 05:10:02.319839] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:15:20.076 [2024-12-09 05:10:02.319846] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:15:23.364 05:10:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 01:15:23.364 spdk_app_start Round 2 01:15:23.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 01:15:23.364 05:10:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 01:15:23.364 05:10:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58356 /var/tmp/spdk-nbd.sock 01:15:23.364 05:10:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58356 ']' 01:15:23.364 05:10:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:15:23.364 05:10:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:23.364 05:10:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
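The grep/dd/stat runs repeated for every /dev/nbdX above come from the waitfornbd helper in autotest_common.sh. The sketch below is a reconstruction from those trace lines only; the retry sleep between attempts is an assumption, since the trace always succeeds on the first pass, and the scratch path is illustrative.

    # Reconstruction of the readiness check, not the verbatim helper.
    waitfornbd() {
        local nbd_name=$1 i scratch=/tmp/nbdtest

        # First wait for the device to show up in the kernel partition table...
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed retry delay; not visible in the trace
        done

        # ...then prove it services I/O: read one 4 KiB block with O_DIRECT
        # and confirm a non-empty result, as the stat/'!=' 0 check above does.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || { sleep 0.1; continue; }
            [[ $(stat -c %s "$scratch") -ne 0 ]] && { rm -f "$scratch"; return 0; }
        done
        rm -f "$scratch"
        return 1
    }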
01:15:23.364 05:10:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:23.364 05:10:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:15:23.364 05:10:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:23.364 05:10:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:15:23.364 05:10:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:15:23.364 Malloc0 01:15:23.364 05:10:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 01:15:23.364 Malloc1 01:15:23.364 05:10:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:15:23.364 05:10:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:23.364 05:10:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 01:15:23.364 05:10:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 01:15:23.364 05:10:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:23.364 05:10:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 01:15:23.364 05:10:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 01:15:23.364 05:10:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:23.364 05:10:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 01:15:23.364 05:10:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 01:15:23.364 05:10:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:23.364 05:10:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 01:15:23.364 05:10:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 01:15:23.364 05:10:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 01:15:23.364 05:10:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:15:23.364 05:10:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 01:15:23.621 /dev/nbd0 01:15:23.621 05:10:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 01:15:23.621 05:10:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 01:15:23.621 05:10:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 01:15:23.621 05:10:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:15:23.621 05:10:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:15:23.621 05:10:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:15:23.621 05:10:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 01:15:23.621 05:10:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:15:23.621 05:10:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:15:23.621 05:10:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:15:23.621 05:10:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:15:23.621 1+0 records in 01:15:23.621 1+0 records out 
01:15:23.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545045 s, 7.5 MB/s 01:15:23.621 05:10:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:15:23.621 05:10:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:15:23.621 05:10:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:15:23.621 05:10:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:15:23.621 05:10:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:15:23.621 05:10:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:15:23.621 05:10:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:15:23.621 05:10:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 01:15:23.879 /dev/nbd1 01:15:23.879 05:10:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 01:15:23.879 05:10:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 01:15:23.879 05:10:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 01:15:23.879 05:10:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 01:15:23.879 05:10:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 01:15:23.879 05:10:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 01:15:23.879 05:10:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 01:15:23.879 05:10:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 01:15:23.879 05:10:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 01:15:23.879 05:10:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 01:15:23.879 05:10:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 01:15:23.879 1+0 records in 01:15:23.879 1+0 records out 01:15:23.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467173 s, 8.8 MB/s 01:15:23.879 05:10:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:15:23.879 05:10:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 01:15:23.879 05:10:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 01:15:23.879 05:10:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 01:15:23.879 05:10:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 01:15:23.879 05:10:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 01:15:23.879 05:10:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 01:15:23.879 05:10:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:15:23.879 05:10:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:23.879 05:10:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:15:24.136 05:10:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 01:15:24.136 { 01:15:24.136 "nbd_device": "/dev/nbd0", 01:15:24.136 "bdev_name": "Malloc0" 01:15:24.136 }, 01:15:24.136 { 01:15:24.136 "nbd_device": "/dev/nbd1", 01:15:24.136 "bdev_name": "Malloc1" 01:15:24.136 } 
01:15:24.136 ]' 01:15:24.136 05:10:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 01:15:24.136 { 01:15:24.136 "nbd_device": "/dev/nbd0", 01:15:24.136 "bdev_name": "Malloc0" 01:15:24.136 }, 01:15:24.136 { 01:15:24.136 "nbd_device": "/dev/nbd1", 01:15:24.136 "bdev_name": "Malloc1" 01:15:24.136 } 01:15:24.136 ]' 01:15:24.136 05:10:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 01:15:24.137 /dev/nbd1' 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 01:15:24.137 /dev/nbd1' 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 01:15:24.137 256+0 records in 01:15:24.137 256+0 records out 01:15:24.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137712 s, 76.1 MB/s 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 01:15:24.137 256+0 records in 01:15:24.137 256+0 records out 01:15:24.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250017 s, 41.9 MB/s 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 01:15:24.137 05:10:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 01:15:24.393 256+0 records in 01:15:24.393 256+0 records out 01:15:24.393 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269144 s, 39.0 MB/s 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 01:15:24.393 05:10:06 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 01:15:24.393 05:10:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 01:15:24.651 05:10:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 01:15:24.651 05:10:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 01:15:24.651 05:10:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 01:15:24.651 05:10:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 01:15:24.651 05:10:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 01:15:24.651 05:10:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 01:15:24.651 05:10:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 01:15:24.651 05:10:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 01:15:24.651 05:10:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 01:15:24.651 05:10:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 01:15:24.651 05:10:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 01:15:24.909 05:10:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 01:15:24.909 05:10:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 01:15:24.909 05:10:07 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 01:15:24.909 05:10:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 01:15:24.909 05:10:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 01:15:24.909 05:10:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 01:15:24.909 05:10:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 01:15:24.909 05:10:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 01:15:24.909 05:10:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 01:15:24.909 05:10:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 01:15:24.909 05:10:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 01:15:24.909 05:10:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 01:15:24.909 05:10:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 01:15:25.167 05:10:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 01:15:25.425 [2024-12-09 05:10:07.660559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:15:25.425 [2024-12-09 05:10:07.714733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:15:25.425 [2024-12-09 05:10:07.714736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:25.425 [2024-12-09 05:10:07.755213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:25.425 [2024-12-09 05:10:07.755283] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 01:15:25.425 [2024-12-09 05:10:07.755290] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 01:15:28.706 05:10:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58356 /var/tmp/spdk-nbd.sock 01:15:28.706 05:10:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58356 ']' 01:15:28.706 05:10:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 01:15:28.706 05:10:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:28.706 05:10:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 01:15:28.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
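The nbd_dd_data_verify trace above follows a simple write-then-verify pattern: fill a temporary file with random data, dd it onto each exported NBD device with O_DIRECT, then cmp each device against the file before deleting it. A minimal standalone sketch of the same steps (the temp-file path is illustrative; the device names and dd/cmp arguments mirror the trace):

  # write random data to each NBD device and verify it reads back identically
  tmp=/tmp/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # bypass the page cache
      cmp -b -n 1M "$tmp" "$dev"                              # non-zero exit means a data mismatch
  done
  rm "$tmp"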
01:15:28.706 05:10:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:28.706 05:10:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:15:28.706 05:10:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:28.707 05:10:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 01:15:28.707 05:10:10 event.app_repeat -- event/event.sh@39 -- # killprocess 58356 01:15:28.707 05:10:10 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58356 ']' 01:15:28.707 05:10:10 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58356 01:15:28.707 05:10:10 event.app_repeat -- common/autotest_common.sh@959 -- # uname 01:15:28.707 05:10:10 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:28.707 05:10:10 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58356 01:15:28.707 05:10:10 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:28.707 05:10:10 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:28.707 05:10:10 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58356' 01:15:28.707 killing process with pid 58356 01:15:28.707 05:10:10 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58356 01:15:28.707 05:10:10 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58356 01:15:28.707 spdk_app_start is called in Round 0. 01:15:28.707 Shutdown signal received, stop current app iteration 01:15:28.707 Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 reinitialization... 01:15:28.707 spdk_app_start is called in Round 1. 01:15:28.707 Shutdown signal received, stop current app iteration 01:15:28.707 Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 reinitialization... 01:15:28.707 spdk_app_start is called in Round 2. 01:15:28.707 Shutdown signal received, stop current app iteration 01:15:28.707 Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 reinitialization... 01:15:28.707 spdk_app_start is called in Round 3. 01:15:28.707 Shutdown signal received, stop current app iteration 01:15:28.707 05:10:10 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 01:15:28.707 05:10:10 event.app_repeat -- event/event.sh@42 -- # return 0 01:15:28.707 01:15:28.707 real 0m17.634s 01:15:28.707 user 0m39.309s 01:15:28.707 sys 0m2.447s 01:15:28.707 05:10:10 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:28.707 ************************************ 01:15:28.707 END TEST app_repeat 01:15:28.707 ************************************ 01:15:28.707 05:10:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 01:15:28.707 05:10:10 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 01:15:28.707 05:10:10 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 01:15:28.707 05:10:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:28.707 05:10:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:28.707 05:10:10 event -- common/autotest_common.sh@10 -- # set +x 01:15:28.707 ************************************ 01:15:28.707 START TEST cpu_locks 01:15:28.707 ************************************ 01:15:28.707 05:10:10 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 01:15:28.707 * Looking for test storage... 
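The app_repeat test that finished above cycles one application through four rounds: each round sends SIGTERM over the RPC socket with spdk_kill_instance and then waits for the app to come back up, which is what the paired "spdk_app_start is called in Round N" / "Shutdown signal received" notices record. A simplified sketch of that loop (the socket path and 3-second settle time come from the trace; the loop structure itself is illustrative):

  for round in 0 1 2 3; do
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3   # give the app time to shut down and restart before the next round
  done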
01:15:28.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 01:15:28.707 05:10:11 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:28.707 05:10:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 01:15:28.707 05:10:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:28.967 05:10:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:28.967 05:10:11 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:28.967 05:10:11 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:28.967 05:10:11 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:28.967 05:10:11 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 01:15:28.967 05:10:11 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 01:15:28.967 05:10:11 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 01:15:28.967 05:10:11 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 01:15:28.967 05:10:11 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 01:15:28.967 05:10:11 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 01:15:28.967 05:10:11 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 01:15:28.967 05:10:11 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@345 -- # : 1 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:28.968 05:10:11 event.cpu_locks -- scripts/common.sh@368 -- # return 0 01:15:28.968 05:10:11 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:28.968 05:10:11 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:28.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:28.968 --rc genhtml_branch_coverage=1 01:15:28.968 --rc genhtml_function_coverage=1 01:15:28.968 --rc genhtml_legend=1 01:15:28.968 --rc geninfo_all_blocks=1 01:15:28.968 --rc geninfo_unexecuted_blocks=1 01:15:28.968 01:15:28.968 ' 01:15:28.968 05:10:11 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:28.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:28.968 --rc genhtml_branch_coverage=1 01:15:28.968 --rc genhtml_function_coverage=1 
01:15:28.968 --rc genhtml_legend=1 01:15:28.968 --rc geninfo_all_blocks=1 01:15:28.968 --rc geninfo_unexecuted_blocks=1 01:15:28.968 01:15:28.968 ' 01:15:28.968 05:10:11 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:28.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:28.968 --rc genhtml_branch_coverage=1 01:15:28.968 --rc genhtml_function_coverage=1 01:15:28.968 --rc genhtml_legend=1 01:15:28.968 --rc geninfo_all_blocks=1 01:15:28.968 --rc geninfo_unexecuted_blocks=1 01:15:28.968 01:15:28.968 ' 01:15:28.968 05:10:11 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:28.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:28.968 --rc genhtml_branch_coverage=1 01:15:28.968 --rc genhtml_function_coverage=1 01:15:28.968 --rc genhtml_legend=1 01:15:28.968 --rc geninfo_all_blocks=1 01:15:28.968 --rc geninfo_unexecuted_blocks=1 01:15:28.968 01:15:28.968 ' 01:15:28.968 05:10:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 01:15:28.968 05:10:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 01:15:28.968 05:10:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 01:15:28.968 05:10:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 01:15:28.968 05:10:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:28.968 05:10:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:28.968 05:10:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:15:28.968 ************************************ 01:15:28.968 START TEST default_locks 01:15:28.968 ************************************ 01:15:28.968 05:10:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 01:15:28.968 05:10:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58780 01:15:28.968 05:10:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58780 01:15:28.968 05:10:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:15:28.968 05:10:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58780 ']' 01:15:28.968 05:10:11 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:28.968 05:10:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:28.968 05:10:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:28.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:28.968 05:10:11 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:28.968 05:10:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:15:28.968 [2024-12-09 05:10:11.290667] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:15:28.968 [2024-12-09 05:10:11.290740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58780 ] 01:15:29.228 [2024-12-09 05:10:11.443583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:29.228 [2024-12-09 05:10:11.496928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:29.228 [2024-12-09 05:10:11.551871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:29.798 05:10:12 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:29.798 05:10:12 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 01:15:29.798 05:10:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58780 01:15:29.798 05:10:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58780 01:15:29.798 05:10:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:15:30.058 05:10:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58780 01:15:30.058 05:10:12 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58780 ']' 01:15:30.058 05:10:12 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58780 01:15:30.058 05:10:12 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 01:15:30.058 05:10:12 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:30.058 05:10:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58780 01:15:30.058 killing process with pid 58780 01:15:30.058 05:10:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:30.058 05:10:12 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:30.058 05:10:12 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58780' 01:15:30.058 05:10:12 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58780 01:15:30.058 05:10:12 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58780 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58780 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58780 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58780 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58780 ']' 01:15:30.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:15:30.629 ERROR: process (pid: 58780) is no longer running 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:15:30.629 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58780) - No such process 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 01:15:30.629 01:15:30.629 real 0m1.836s 01:15:30.629 user 0m1.884s 01:15:30.629 sys 0m0.452s 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:30.629 05:10:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 01:15:30.629 ************************************ 01:15:30.629 END TEST default_locks 01:15:30.629 ************************************ 01:15:30.889 05:10:13 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 01:15:30.889 05:10:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:30.889 05:10:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:30.889 05:10:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:15:30.889 ************************************ 01:15:30.889 START TEST default_locks_via_rpc 01:15:30.889 ************************************ 01:15:30.890 05:10:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 01:15:30.890 05:10:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58832 01:15:30.890 05:10:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:15:30.890 05:10:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58832 01:15:30.890 05:10:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58832 ']' 01:15:30.890 05:10:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:30.890 05:10:13 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@840 -- # local max_retries=100 01:15:30.890 05:10:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:30.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:30.890 05:10:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:30.890 05:10:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:30.890 [2024-12-09 05:10:13.194292] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:15:30.890 [2024-12-09 05:10:13.194453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58832 ] 01:15:31.157 [2024-12-09 05:10:13.344601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:31.157 [2024-12-09 05:10:13.420873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:31.157 [2024-12-09 05:10:13.521438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:31.762 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:31.762 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:15:31.762 05:10:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 01:15:31.762 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:31.762 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:31.762 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:31.762 05:10:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 01:15:31.762 05:10:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 01:15:31.762 05:10:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 01:15:31.762 05:10:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 01:15:31.763 05:10:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 01:15:31.763 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:31.763 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:31.763 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:31.763 05:10:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58832 01:15:31.763 05:10:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:15:31.763 05:10:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58832 01:15:32.022 05:10:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58832 01:15:32.022 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58832 ']' 01:15:32.022 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58832 
01:15:32.022 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 01:15:32.022 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:32.022 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58832 01:15:32.282 killing process with pid 58832 01:15:32.282 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:32.282 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:32.282 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58832' 01:15:32.282 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58832 01:15:32.282 05:10:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58832 01:15:32.849 01:15:32.849 real 0m1.951s 01:15:32.849 user 0m1.865s 01:15:32.849 sys 0m0.630s 01:15:32.849 05:10:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:32.849 ************************************ 01:15:32.849 END TEST default_locks_via_rpc 01:15:32.849 ************************************ 01:15:32.849 05:10:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:32.849 05:10:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 01:15:32.849 05:10:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:32.849 05:10:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:32.849 05:10:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:15:32.849 ************************************ 01:15:32.849 START TEST non_locking_app_on_locked_coremask 01:15:32.849 ************************************ 01:15:32.849 05:10:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 01:15:32.849 05:10:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58883 01:15:32.850 05:10:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:15:32.850 05:10:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58883 /var/tmp/spdk.sock 01:15:32.850 05:10:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58883 ']' 01:15:32.850 05:10:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:32.850 05:10:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:32.850 05:10:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:32.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
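Both default_locks and default_locks_via_rpc, completed above, revolve around the same mechanism: a normally started spdk_tgt takes a file lock for every core in its mask (the lock files live under /var/tmp as spdk_cpu_lock_*), and locks_exist simply asks lslocks whether the target's PID still holds one. default_locks_via_rpc additionally toggles the locks at runtime with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs seen in the trace. A rough sketch of those checks (the pid variable and socket path are placeholders):

  # does the target still hold a per-core lock file?
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held"

  # release and re-acquire the per-core locks on a running target
  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks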
01:15:32.850 05:10:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:32.850 05:10:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:15:32.850 [2024-12-09 05:10:15.215298] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:15:32.850 [2024-12-09 05:10:15.215484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58883 ] 01:15:33.109 [2024-12-09 05:10:15.367388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:33.109 [2024-12-09 05:10:15.444112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:33.109 [2024-12-09 05:10:15.544824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:33.678 05:10:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:33.678 05:10:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 01:15:33.678 05:10:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58899 01:15:33.678 05:10:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 01:15:33.678 05:10:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58899 /var/tmp/spdk2.sock 01:15:33.678 05:10:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58899 ']' 01:15:33.678 05:10:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:15:33.678 05:10:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:33.678 05:10:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:15:33.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:15:33.678 05:10:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:33.678 05:10:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:15:33.938 [2024-12-09 05:10:16.141481] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:15:33.938 [2024-12-09 05:10:16.141635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58899 ] 01:15:33.938 [2024-12-09 05:10:16.289288] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
01:15:33.938 [2024-12-09 05:10:16.293383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:34.197 [2024-12-09 05:10:16.446074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:34.457 [2024-12-09 05:10:16.654505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:34.717 05:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:34.717 05:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 01:15:34.717 05:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58883 01:15:34.717 05:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:15:34.717 05:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58883 01:15:35.655 05:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58883 01:15:35.655 05:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58883 ']' 01:15:35.655 05:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58883 01:15:35.655 05:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 01:15:35.655 05:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:35.655 05:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58883 01:15:35.655 killing process with pid 58883 01:15:35.655 05:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:35.655 05:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:35.655 05:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58883' 01:15:35.655 05:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58883 01:15:35.655 05:10:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58883 01:15:36.592 05:10:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58899 01:15:36.592 05:10:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58899 ']' 01:15:36.592 05:10:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58899 01:15:36.592 05:10:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 01:15:36.592 05:10:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:36.592 05:10:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58899 01:15:36.592 killing process with pid 58899 01:15:36.592 05:10:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:36.592 05:10:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:36.592 05:10:18 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58899' 01:15:36.592 05:10:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58899 01:15:36.592 05:10:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58899 01:15:36.883 ************************************ 01:15:36.883 END TEST non_locking_app_on_locked_coremask 01:15:36.883 ************************************ 01:15:36.883 01:15:36.883 real 0m3.926s 01:15:36.883 user 0m3.947s 01:15:36.883 sys 0m1.329s 01:15:36.883 05:10:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:36.883 05:10:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:15:36.883 05:10:19 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 01:15:36.883 05:10:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:36.883 05:10:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:36.883 05:10:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:15:36.883 ************************************ 01:15:36.883 START TEST locking_app_on_unlocked_coremask 01:15:36.883 ************************************ 01:15:36.883 05:10:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 01:15:36.883 05:10:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58967 01:15:36.883 05:10:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 01:15:36.883 05:10:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58967 /var/tmp/spdk.sock 01:15:36.883 05:10:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58967 ']' 01:15:36.883 05:10:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:36.883 05:10:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:36.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:36.883 05:10:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:36.883 05:10:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:36.883 05:10:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:15:36.883 [2024-12-09 05:10:19.208652] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:15:36.883 [2024-12-09 05:10:19.208724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58967 ] 01:15:37.143 [2024-12-09 05:10:19.362447] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
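non_locking_app_on_locked_coremask, which just completed, starts a normal spdk_tgt on core mask 0x1 and then shows that a second instance on the same mask still starts as long as it is launched with --disable-cpumask-locks (hence the "CPU core locks deactivated." notice for the second PID). The locking_app_on_unlocked_coremask test beginning here mirrors it: the first instance runs with --disable-cpumask-locks, so the second, normally started instance is free to claim the core lock itself. A condensed sketch of the first scenario (socket path and flags follow the trace; process management is omitted):

  build/bin/spdk_tgt -m 0x1 &                                                   # first instance locks core 0
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # second instance skips the lock and can share the core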
01:15:37.143 [2024-12-09 05:10:19.362493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:37.143 [2024-12-09 05:10:19.415028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:37.143 [2024-12-09 05:10:19.470106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:37.713 05:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:37.713 05:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 01:15:37.713 05:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58983 01:15:37.713 05:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 01:15:37.713 05:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58983 /var/tmp/spdk2.sock 01:15:37.713 05:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58983 ']' 01:15:37.713 05:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:15:37.713 05:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:37.713 05:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:15:37.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:15:37.713 05:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:37.713 05:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:15:37.713 [2024-12-09 05:10:20.128911] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:15:37.713 [2024-12-09 05:10:20.129071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58983 ] 01:15:37.973 [2024-12-09 05:10:20.278252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:37.973 [2024-12-09 05:10:20.385192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:38.233 [2024-12-09 05:10:20.498725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:38.801 05:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:38.801 05:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 01:15:38.801 05:10:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58983 01:15:38.801 05:10:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58983 01:15:38.801 05:10:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:15:39.370 05:10:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58967 01:15:39.370 05:10:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58967 ']' 01:15:39.370 05:10:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58967 01:15:39.370 05:10:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 01:15:39.370 05:10:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:39.370 05:10:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58967 01:15:39.629 killing process with pid 58967 01:15:39.629 05:10:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:39.629 05:10:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:39.629 05:10:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58967' 01:15:39.629 05:10:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58967 01:15:39.629 05:10:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58967 01:15:40.199 05:10:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58983 01:15:40.199 05:10:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58983 ']' 01:15:40.199 05:10:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58983 01:15:40.199 05:10:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 01:15:40.199 05:10:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:40.199 05:10:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58983 01:15:40.199 05:10:22 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:40.199 05:10:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:40.199 05:10:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58983' 01:15:40.199 killing process with pid 58983 01:15:40.199 05:10:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58983 01:15:40.199 05:10:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58983 01:15:40.768 ************************************ 01:15:40.768 END TEST locking_app_on_unlocked_coremask 01:15:40.768 ************************************ 01:15:40.768 01:15:40.768 real 0m3.771s 01:15:40.768 user 0m4.081s 01:15:40.768 sys 0m1.043s 01:15:40.768 05:10:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:40.768 05:10:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 01:15:40.768 05:10:22 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 01:15:40.768 05:10:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:40.768 05:10:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:40.768 05:10:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:15:40.768 ************************************ 01:15:40.768 START TEST locking_app_on_locked_coremask 01:15:40.768 ************************************ 01:15:40.768 05:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 01:15:40.768 05:10:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59043 01:15:40.768 05:10:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 01:15:40.769 05:10:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59043 /var/tmp/spdk.sock 01:15:40.769 05:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59043 ']' 01:15:40.769 05:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:40.769 05:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:40.769 05:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:40.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:40.769 05:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:40.769 05:10:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:15:40.769 [2024-12-09 05:10:23.036616] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:15:40.769 [2024-12-09 05:10:23.036738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59043 ] 01:15:40.769 [2024-12-09 05:10:23.187728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:41.028 [2024-12-09 05:10:23.238584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:41.028 [2024-12-09 05:10:23.293146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59055 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59055 /var/tmp/spdk2.sock 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59055 /var/tmp/spdk2.sock 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 01:15:41.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59055 /var/tmp/spdk2.sock 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59055 ']' 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:41.597 05:10:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 01:15:41.597 [2024-12-09 05:10:23.989081] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:15:41.597 [2024-12-09 05:10:23.989671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59055 ] 01:15:41.858 [2024-12-09 05:10:24.139483] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59043 has claimed it. 01:15:41.858 [2024-12-09 05:10:24.139534] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 01:15:42.435 ERROR: process (pid: 59055) is no longer running 01:15:42.435 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59055) - No such process 01:15:42.435 05:10:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:42.435 05:10:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 01:15:42.435 05:10:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 01:15:42.436 05:10:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:42.436 05:10:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:15:42.436 05:10:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:42.436 05:10:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59043 01:15:42.436 05:10:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59043 01:15:42.436 05:10:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 01:15:42.706 05:10:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59043 01:15:42.706 05:10:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59043 ']' 01:15:42.706 05:10:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59043 01:15:42.706 05:10:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 01:15:42.706 05:10:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:42.706 05:10:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59043 01:15:42.706 05:10:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:42.706 05:10:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:42.706 05:10:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59043' 01:15:42.706 killing process with pid 59043 01:15:42.706 05:10:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59043 01:15:42.706 05:10:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59043 01:15:43.273 01:15:43.273 real 0m2.519s 01:15:43.273 user 0m2.864s 01:15:43.273 sys 0m0.611s 01:15:43.273 05:10:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:43.273 05:10:25 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 01:15:43.273 ************************************ 01:15:43.273 END TEST locking_app_on_locked_coremask 01:15:43.273 ************************************ 01:15:43.273 05:10:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 01:15:43.273 05:10:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:43.273 05:10:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:43.273 05:10:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:15:43.273 ************************************ 01:15:43.273 START TEST locking_overlapped_coremask 01:15:43.273 ************************************ 01:15:43.273 05:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 01:15:43.273 05:10:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59106 01:15:43.273 05:10:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 01:15:43.273 05:10:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59106 /var/tmp/spdk.sock 01:15:43.273 05:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59106 ']' 01:15:43.273 05:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:43.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:43.274 05:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:43.274 05:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:43.274 05:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:43.274 05:10:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:15:43.274 [2024-12-09 05:10:25.614297] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
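locking_app_on_locked_coremask, which ended just above, is the negative counterpart: with a normal spdk_tgt already holding the core-0 lock, a second normal instance on the same mask must fail, and the trace shows exactly that ("Cannot create lock on core 0, probably process 59043 has claimed it." followed by "Unable to acquire lock on assigned core mask - exiting."). The test wraps the attempt in the NOT helper so the expected failure counts as success; roughly (paths as in the trace, error handling simplified):

  # expected to exit non-zero while the first instance still holds the core lock
  if build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
      echo "second instance started although core 0 was locked" >&2
      exit 1
  fi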
01:15:43.274 [2024-12-09 05:10:25.614374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59106 ] 01:15:43.533 [2024-12-09 05:10:25.769089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:15:43.533 [2024-12-09 05:10:25.826919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:15:43.533 [2024-12-09 05:10:25.827142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:43.533 [2024-12-09 05:10:25.827145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:15:43.533 [2024-12-09 05:10:25.883250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59124 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59124 /var/tmp/spdk2.sock 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59124 /var/tmp/spdk2.sock 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59124 /var/tmp/spdk2.sock 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59124 ']' 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:15:44.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:44.100 05:10:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:15:44.358 [2024-12-09 05:10:26.558639] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:15:44.358 [2024-12-09 05:10:26.558814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59124 ] 01:15:44.358 [2024-12-09 05:10:26.708778] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59106 has claimed it. 01:15:44.358 [2024-12-09 05:10:26.708859] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 01:15:44.925 ERROR: process (pid: 59124) is no longer running 01:15:44.925 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59124) - No such process 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59106 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59106 ']' 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59106 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59106 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59106' 01:15:44.925 killing process with pid 59106 01:15:44.925 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59106 01:15:44.925 05:10:27 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59106 01:15:45.491 01:15:45.491 real 0m2.353s 01:15:45.491 user 0m6.525s 01:15:45.491 sys 0m0.404s 01:15:45.491 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:45.491 05:10:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 01:15:45.491 ************************************ 01:15:45.491 END TEST locking_overlapped_coremask 01:15:45.491 ************************************ 01:15:45.750 05:10:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 01:15:45.750 05:10:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:45.750 05:10:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:45.750 05:10:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:15:45.750 ************************************ 01:15:45.750 START TEST locking_overlapped_coremask_via_rpc 01:15:45.750 ************************************ 01:15:45.750 05:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 01:15:45.750 05:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59164 01:15:45.750 05:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 01:15:45.750 05:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59164 /var/tmp/spdk.sock 01:15:45.750 05:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59164 ']' 01:15:45.750 05:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:45.750 05:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:45.750 05:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:45.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:45.750 05:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:45.750 05:10:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:45.750 [2024-12-09 05:10:28.036276] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:15:45.750 [2024-12-09 05:10:28.036352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59164 ] 01:15:45.750 [2024-12-09 05:10:28.185141] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
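check_remaining_locks, run at the end of the test above and again at the end of the via-RPC variant that starts here, boils down to comparing the lock files left on disk against the set the surviving 0x7 target should still hold (taken directly from the traced helper):

  locks=(/var/tmp/spdk_cpu_lock_*)                      # whatever lock files remain
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # one file per core in mask 0x7
  [[ "${locks[*]}" == "${locks_expected[*]}" ]]         # must match exactly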
01:15:45.750 [2024-12-09 05:10:28.185185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:15:46.009 [2024-12-09 05:10:28.265828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:15:46.009 [2024-12-09 05:10:28.265917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:15:46.009 [2024-12-09 05:10:28.265918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:46.009 [2024-12-09 05:10:28.366759] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:46.577 05:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:46.577 05:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:15:46.577 05:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59182 01:15:46.577 05:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 01:15:46.577 05:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59182 /var/tmp/spdk2.sock 01:15:46.577 05:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59182 ']' 01:15:46.577 05:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:15:46.577 05:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:46.577 05:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:15:46.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 01:15:46.577 05:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:46.577 05:10:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:46.577 [2024-12-09 05:10:29.001074] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:15:46.577 [2024-12-09 05:10:29.001281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59182 ] 01:15:46.836 [2024-12-09 05:10:29.158247] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
01:15:46.836 [2024-12-09 05:10:29.162348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:15:46.836 [2024-12-09 05:10:29.275365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:15:46.836 [2024-12-09 05:10:29.282447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:15:46.836 [2024-12-09 05:10:29.282451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:15:47.095 [2024-12-09 05:10:29.392629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:47.666 [2024-12-09 05:10:29.921545] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59164 has claimed it. 
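In this variant both targets start with --disable-cpumask-locks, so neither takes the per-core locks at boot; the locks are instead requested over JSON-RPC. The first call succeeds and pins cores 0-2, the second collides on core 2 and returns the -32603 error shown just below. The equivalent manual invocations, using the same rpc.py script and framework_enable_cpumask_locks method seen in the trace:

  scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first target: claims 0x7, succeeds
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: core 2 already held, fails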
01:15:47.666 request: 01:15:47.666 { 01:15:47.666 "method": "framework_enable_cpumask_locks", 01:15:47.666 "req_id": 1 01:15:47.666 } 01:15:47.666 Got JSON-RPC error response 01:15:47.666 response: 01:15:47.666 { 01:15:47.666 "code": -32603, 01:15:47.666 "message": "Failed to claim CPU core: 2" 01:15:47.666 } 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59164 /var/tmp/spdk.sock 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59164 ']' 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:47.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:47.666 05:10:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:47.928 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:47.928 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:15:47.928 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59182 /var/tmp/spdk2.sock 01:15:47.928 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59182 ']' 01:15:47.928 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 01:15:47.928 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:47.928 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 01:15:47.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
01:15:47.928 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:47.928 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:48.188 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:48.188 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 01:15:48.188 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 01:15:48.188 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 01:15:48.188 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 01:15:48.188 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 01:15:48.188 01:15:48.188 real 0m2.430s 01:15:48.188 user 0m1.183s 01:15:48.188 sys 0m0.169s 01:15:48.188 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:48.188 05:10:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 01:15:48.188 ************************************ 01:15:48.188 END TEST locking_overlapped_coremask_via_rpc 01:15:48.188 ************************************ 01:15:48.188 05:10:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 01:15:48.188 05:10:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59164 ]] 01:15:48.188 05:10:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59164 01:15:48.188 05:10:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59164 ']' 01:15:48.188 05:10:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59164 01:15:48.188 05:10:30 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 01:15:48.188 05:10:30 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:48.188 05:10:30 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59164 01:15:48.188 killing process with pid 59164 01:15:48.188 05:10:30 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:48.188 05:10:30 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:48.188 05:10:30 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59164' 01:15:48.188 05:10:30 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59164 01:15:48.188 05:10:30 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59164 01:15:48.758 05:10:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59182 ]] 01:15:48.758 05:10:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59182 01:15:48.758 05:10:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59182 ']' 01:15:48.758 05:10:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59182 01:15:48.758 05:10:31 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 01:15:48.758 05:10:31 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:48.758 
05:10:31 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59182 01:15:48.758 05:10:31 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:15:48.758 killing process with pid 59182 01:15:48.758 05:10:31 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:15:48.758 05:10:31 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59182' 01:15:48.758 05:10:31 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59182 01:15:48.758 05:10:31 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59182 01:15:49.328 05:10:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 01:15:49.328 Process with pid 59164 is not found 01:15:49.328 05:10:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 01:15:49.328 05:10:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59164 ]] 01:15:49.328 05:10:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59164 01:15:49.328 05:10:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59164 ']' 01:15:49.328 05:10:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59164 01:15:49.328 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59164) - No such process 01:15:49.328 05:10:31 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59164 is not found' 01:15:49.328 05:10:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59182 ]] 01:15:49.328 05:10:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59182 01:15:49.328 Process with pid 59182 is not found 01:15:49.328 05:10:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59182 ']' 01:15:49.328 05:10:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59182 01:15:49.328 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59182) - No such process 01:15:49.328 05:10:31 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59182 is not found' 01:15:49.328 05:10:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 01:15:49.328 01:15:49.328 real 0m20.535s 01:15:49.328 user 0m35.332s 01:15:49.328 sys 0m5.744s 01:15:49.328 ************************************ 01:15:49.328 END TEST cpu_locks 01:15:49.328 ************************************ 01:15:49.328 05:10:31 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:49.328 05:10:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 01:15:49.328 ************************************ 01:15:49.328 END TEST event 01:15:49.328 ************************************ 01:15:49.328 01:15:49.328 real 0m46.445s 01:15:49.328 user 1m27.941s 01:15:49.328 sys 0m9.131s 01:15:49.328 05:10:31 event -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:49.328 05:10:31 event -- common/autotest_common.sh@10 -- # set +x 01:15:49.328 05:10:31 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 01:15:49.328 05:10:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:49.328 05:10:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:49.328 05:10:31 -- common/autotest_common.sh@10 -- # set +x 01:15:49.328 ************************************ 01:15:49.328 START TEST thread 01:15:49.328 ************************************ 01:15:49.328 05:10:31 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 01:15:49.328 * Looking for test storage... 
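The killprocess/cleanup sequence that closes the cpu_locks suite above follows one pattern throughout: probe that the pid is still alive, resolve its comm name, kill it, then wait for it so the next test starts from a clean slate; cleanup re-runs the same probe afterwards and simply reports "No such process" / "Process with pid ... is not found" for pids that are already gone. Condensed sketch of the helper as it appears in the traces (the sudo-owned-process branch guarded by the reactor_0 = sudo check is omitted here):

  killprocess() {
    local pid=$1
    kill -0 "$pid" || return            # already gone, nothing to do
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                         # reap it before the next test's lock checks
  }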
01:15:49.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 01:15:49.328 05:10:31 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:49.328 05:10:31 thread -- common/autotest_common.sh@1693 -- # lcov --version 01:15:49.328 05:10:31 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:49.589 05:10:31 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:49.589 05:10:31 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:49.589 05:10:31 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:49.589 05:10:31 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:49.589 05:10:31 thread -- scripts/common.sh@336 -- # IFS=.-: 01:15:49.589 05:10:31 thread -- scripts/common.sh@336 -- # read -ra ver1 01:15:49.589 05:10:31 thread -- scripts/common.sh@337 -- # IFS=.-: 01:15:49.589 05:10:31 thread -- scripts/common.sh@337 -- # read -ra ver2 01:15:49.589 05:10:31 thread -- scripts/common.sh@338 -- # local 'op=<' 01:15:49.589 05:10:31 thread -- scripts/common.sh@340 -- # ver1_l=2 01:15:49.589 05:10:31 thread -- scripts/common.sh@341 -- # ver2_l=1 01:15:49.589 05:10:31 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:49.589 05:10:31 thread -- scripts/common.sh@344 -- # case "$op" in 01:15:49.589 05:10:31 thread -- scripts/common.sh@345 -- # : 1 01:15:49.589 05:10:31 thread -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:49.589 05:10:31 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:15:49.589 05:10:31 thread -- scripts/common.sh@365 -- # decimal 1 01:15:49.589 05:10:31 thread -- scripts/common.sh@353 -- # local d=1 01:15:49.589 05:10:31 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:49.589 05:10:31 thread -- scripts/common.sh@355 -- # echo 1 01:15:49.589 05:10:31 thread -- scripts/common.sh@365 -- # ver1[v]=1 01:15:49.589 05:10:31 thread -- scripts/common.sh@366 -- # decimal 2 01:15:49.589 05:10:31 thread -- scripts/common.sh@353 -- # local d=2 01:15:49.589 05:10:31 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:49.589 05:10:31 thread -- scripts/common.sh@355 -- # echo 2 01:15:49.589 05:10:31 thread -- scripts/common.sh@366 -- # ver2[v]=2 01:15:49.589 05:10:31 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:49.589 05:10:31 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:49.589 05:10:31 thread -- scripts/common.sh@368 -- # return 0 01:15:49.589 05:10:31 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:49.589 05:10:31 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:49.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:49.589 --rc genhtml_branch_coverage=1 01:15:49.589 --rc genhtml_function_coverage=1 01:15:49.589 --rc genhtml_legend=1 01:15:49.589 --rc geninfo_all_blocks=1 01:15:49.589 --rc geninfo_unexecuted_blocks=1 01:15:49.589 01:15:49.589 ' 01:15:49.589 05:10:31 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:49.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:49.589 --rc genhtml_branch_coverage=1 01:15:49.589 --rc genhtml_function_coverage=1 01:15:49.589 --rc genhtml_legend=1 01:15:49.589 --rc geninfo_all_blocks=1 01:15:49.589 --rc geninfo_unexecuted_blocks=1 01:15:49.589 01:15:49.589 ' 01:15:49.589 05:10:31 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:49.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
01:15:49.589 --rc genhtml_branch_coverage=1 01:15:49.589 --rc genhtml_function_coverage=1 01:15:49.589 --rc genhtml_legend=1 01:15:49.589 --rc geninfo_all_blocks=1 01:15:49.589 --rc geninfo_unexecuted_blocks=1 01:15:49.589 01:15:49.589 ' 01:15:49.589 05:10:31 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:49.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:49.589 --rc genhtml_branch_coverage=1 01:15:49.589 --rc genhtml_function_coverage=1 01:15:49.589 --rc genhtml_legend=1 01:15:49.589 --rc geninfo_all_blocks=1 01:15:49.589 --rc geninfo_unexecuted_blocks=1 01:15:49.589 01:15:49.589 ' 01:15:49.589 05:10:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 01:15:49.589 05:10:31 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 01:15:49.589 05:10:31 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:49.589 05:10:31 thread -- common/autotest_common.sh@10 -- # set +x 01:15:49.589 ************************************ 01:15:49.589 START TEST thread_poller_perf 01:15:49.589 ************************************ 01:15:49.589 05:10:31 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 01:15:49.589 [2024-12-09 05:10:31.910777] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:15:49.589 [2024-12-09 05:10:31.910876] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59318 ] 01:15:49.849 [2024-12-09 05:10:32.067210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:49.849 Running 1000 pollers for 1 seconds with 1 microseconds period. 
01:15:49.849 [2024-12-09 05:10:32.141285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:51.226 [2024-12-09T05:10:33.682Z] ====================================== 01:15:51.226 [2024-12-09T05:10:33.682Z] busy:2297685318 (cyc) 01:15:51.226 [2024-12-09T05:10:33.682Z] total_run_count: 382000 01:15:51.226 [2024-12-09T05:10:33.682Z] tsc_hz: 2290000000 (cyc) 01:15:51.226 [2024-12-09T05:10:33.682Z] ====================================== 01:15:51.226 [2024-12-09T05:10:33.682Z] poller_cost: 6014 (cyc), 2626 (nsec) 01:15:51.226 01:15:51.226 real 0m1.365s 01:15:51.226 user 0m1.207s 01:15:51.226 sys 0m0.051s 01:15:51.226 05:10:33 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:51.226 05:10:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 01:15:51.226 ************************************ 01:15:51.226 END TEST thread_poller_perf 01:15:51.226 ************************************ 01:15:51.226 05:10:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 01:15:51.226 05:10:33 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 01:15:51.226 05:10:33 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:51.226 05:10:33 thread -- common/autotest_common.sh@10 -- # set +x 01:15:51.226 ************************************ 01:15:51.226 START TEST thread_poller_perf 01:15:51.226 ************************************ 01:15:51.226 05:10:33 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 01:15:51.226 [2024-12-09 05:10:33.339203] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:15:51.226 [2024-12-09 05:10:33.339419] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59354 ] 01:15:51.226 [2024-12-09 05:10:33.496110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:51.226 [2024-12-09 05:10:33.573541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:51.226 Running 1000 pollers for 1 seconds with 0 microseconds period. 
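The poller_cost figures printed by poller_perf are just the measured busy cycles divided by the number of poller invocations, converted to nanoseconds with the reported TSC frequency. For the 1 us run above:

  echo $(( 2297685318 / 382000 ))              # 6014 cycles per poller call
  awk 'BEGIN { printf "%d\n", 6014 / 2.29 }'   # 2626 ns at tsc_hz = 2290000000

The 0 us run that follows applies the same arithmetic to its own counters: 2291953666 / 5325000 is about 430 cycles, i.e. roughly 187 ns per call.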
01:15:52.605 [2024-12-09T05:10:35.061Z] ====================================== 01:15:52.605 [2024-12-09T05:10:35.061Z] busy:2291953666 (cyc) 01:15:52.605 [2024-12-09T05:10:35.061Z] total_run_count: 5325000 01:15:52.605 [2024-12-09T05:10:35.061Z] tsc_hz: 2290000000 (cyc) 01:15:52.605 [2024-12-09T05:10:35.061Z] ====================================== 01:15:52.605 [2024-12-09T05:10:35.061Z] poller_cost: 430 (cyc), 187 (nsec) 01:15:52.605 01:15:52.605 real 0m1.365s 01:15:52.605 user 0m1.207s 01:15:52.605 sys 0m0.050s 01:15:52.605 05:10:34 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:52.605 05:10:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 01:15:52.605 ************************************ 01:15:52.605 END TEST thread_poller_perf 01:15:52.605 ************************************ 01:15:52.605 05:10:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 01:15:52.605 01:15:52.605 real 0m3.086s 01:15:52.605 user 0m2.562s 01:15:52.605 sys 0m0.326s 01:15:52.605 05:10:34 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:52.605 05:10:34 thread -- common/autotest_common.sh@10 -- # set +x 01:15:52.605 ************************************ 01:15:52.605 END TEST thread 01:15:52.605 ************************************ 01:15:52.605 05:10:34 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 01:15:52.605 05:10:34 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 01:15:52.605 05:10:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:52.605 05:10:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:52.605 05:10:34 -- common/autotest_common.sh@10 -- # set +x 01:15:52.605 ************************************ 01:15:52.605 START TEST app_cmdline 01:15:52.605 ************************************ 01:15:52.605 05:10:34 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 01:15:52.605 * Looking for test storage... 
01:15:52.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 01:15:52.605 05:10:34 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:52.605 05:10:34 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 01:15:52.605 05:10:34 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:52.605 05:10:35 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@345 -- # : 1 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@365 -- # decimal 1 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@353 -- # local d=1 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@355 -- # echo 1 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@366 -- # decimal 2 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@353 -- # local d=2 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@355 -- # echo 2 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:52.605 05:10:35 app_cmdline -- scripts/common.sh@368 -- # return 0 01:15:52.605 05:10:35 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:52.605 05:10:35 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:52.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:52.605 --rc genhtml_branch_coverage=1 01:15:52.605 --rc genhtml_function_coverage=1 01:15:52.605 --rc genhtml_legend=1 01:15:52.605 --rc geninfo_all_blocks=1 01:15:52.605 --rc geninfo_unexecuted_blocks=1 01:15:52.605 01:15:52.605 ' 01:15:52.605 05:10:35 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:52.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:52.605 --rc genhtml_branch_coverage=1 01:15:52.605 --rc genhtml_function_coverage=1 01:15:52.605 --rc genhtml_legend=1 01:15:52.605 --rc geninfo_all_blocks=1 01:15:52.605 --rc geninfo_unexecuted_blocks=1 01:15:52.605 
01:15:52.605 ' 01:15:52.605 05:10:35 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:52.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:52.605 --rc genhtml_branch_coverage=1 01:15:52.605 --rc genhtml_function_coverage=1 01:15:52.605 --rc genhtml_legend=1 01:15:52.605 --rc geninfo_all_blocks=1 01:15:52.605 --rc geninfo_unexecuted_blocks=1 01:15:52.605 01:15:52.605 ' 01:15:52.605 05:10:35 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:52.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:52.605 --rc genhtml_branch_coverage=1 01:15:52.605 --rc genhtml_function_coverage=1 01:15:52.605 --rc genhtml_legend=1 01:15:52.605 --rc geninfo_all_blocks=1 01:15:52.605 --rc geninfo_unexecuted_blocks=1 01:15:52.605 01:15:52.605 ' 01:15:52.605 05:10:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 01:15:52.605 05:10:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59436 01:15:52.605 05:10:35 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 01:15:52.605 05:10:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59436 01:15:52.605 05:10:35 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59436 ']' 01:15:52.605 05:10:35 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:15:52.605 05:10:35 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 01:15:52.605 05:10:35 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:15:52.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:15:52.605 05:10:35 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 01:15:52.605 05:10:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:15:52.863 [2024-12-09 05:10:35.088468] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
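The target for this test is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two JSON-RPC methods are reachable on /var/tmp/spdk.sock; the test then confirms both that they work and that anything else is rejected with error -32601, as the env_dpdk_get_mem_stats probe below shows. The same check by hand, with the method names and socket taken from the trace:

  scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version          # allowed, returns the version object
  scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods           # allowed, lists exactly these two methods
  scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats    # not on the allow-list, "Method not found"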
01:15:52.863 [2024-12-09 05:10:35.088543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59436 ] 01:15:52.863 [2024-12-09 05:10:35.239640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:52.863 [2024-12-09 05:10:35.317468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:53.121 [2024-12-09 05:10:35.417857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:53.708 05:10:35 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:15:53.708 05:10:35 app_cmdline -- common/autotest_common.sh@868 -- # return 0 01:15:53.708 05:10:35 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 01:15:54.007 { 01:15:54.008 "version": "SPDK v25.01-pre git sha1 cabd61f7f", 01:15:54.008 "fields": { 01:15:54.008 "major": 25, 01:15:54.008 "minor": 1, 01:15:54.008 "patch": 0, 01:15:54.008 "suffix": "-pre", 01:15:54.008 "commit": "cabd61f7f" 01:15:54.008 } 01:15:54.008 } 01:15:54.008 05:10:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 01:15:54.008 05:10:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 01:15:54.008 05:10:36 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 01:15:54.008 05:10:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 01:15:54.008 05:10:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 01:15:54.008 05:10:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 01:15:54.008 05:10:36 app_cmdline -- app/cmdline.sh@26 -- # sort 01:15:54.008 05:10:36 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 01:15:54.008 05:10:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:15:54.008 05:10:36 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:15:54.008 05:10:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 01:15:54.008 05:10:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 01:15:54.008 05:10:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:15:54.008 05:10:36 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 01:15:54.008 05:10:36 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:15:54.008 05:10:36 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:15:54.008 05:10:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:54.008 05:10:36 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:15:54.008 05:10:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:54.008 05:10:36 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:15:54.008 05:10:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:54.008 05:10:36 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:15:54.008 05:10:36 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:15:54.008 05:10:36 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 01:15:54.008 request: 01:15:54.008 { 01:15:54.008 "method": "env_dpdk_get_mem_stats", 01:15:54.008 "req_id": 1 01:15:54.008 } 01:15:54.008 Got JSON-RPC error response 01:15:54.008 response: 01:15:54.008 { 01:15:54.008 "code": -32601, 01:15:54.008 "message": "Method not found" 01:15:54.008 } 01:15:54.267 05:10:36 app_cmdline -- common/autotest_common.sh@655 -- # es=1 01:15:54.267 05:10:36 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:54.267 05:10:36 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:15:54.267 05:10:36 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:54.267 05:10:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59436 01:15:54.267 05:10:36 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59436 ']' 01:15:54.267 05:10:36 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59436 01:15:54.267 05:10:36 app_cmdline -- common/autotest_common.sh@959 -- # uname 01:15:54.267 05:10:36 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:54.267 05:10:36 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59436 01:15:54.267 05:10:36 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:54.267 05:10:36 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:54.267 05:10:36 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59436' 01:15:54.267 killing process with pid 59436 01:15:54.267 05:10:36 app_cmdline -- common/autotest_common.sh@973 -- # kill 59436 01:15:54.267 05:10:36 app_cmdline -- common/autotest_common.sh@978 -- # wait 59436 01:15:54.835 ************************************ 01:15:54.835 END TEST app_cmdline 01:15:54.835 ************************************ 01:15:54.835 01:15:54.835 real 0m2.301s 01:15:54.835 user 0m2.523s 01:15:54.835 sys 0m0.634s 01:15:54.835 05:10:37 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:54.835 05:10:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 01:15:54.835 05:10:37 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 01:15:54.835 05:10:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:54.835 05:10:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:54.835 05:10:37 -- common/autotest_common.sh@10 -- # set +x 01:15:54.835 ************************************ 01:15:54.835 START TEST version 01:15:54.835 ************************************ 01:15:54.835 05:10:37 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 01:15:54.835 * Looking for test storage... 
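The version test that begins here does not query a running target at all: it pulls each component straight out of include/spdk/version.h with the grep/cut/tr pipeline visible in the trace below, assembles the full string (25.1rc0 here, since the suffix is -pre) and compares it against what the spdk Python package reports via python3 -c 'import spdk; print(spdk.__version__)'. The pipeline for one component, condensed from the trace:

  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # -> 25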
01:15:54.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 01:15:54.835 05:10:37 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:54.835 05:10:37 version -- common/autotest_common.sh@1693 -- # lcov --version 01:15:54.835 05:10:37 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:55.095 05:10:37 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:55.095 05:10:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:55.095 05:10:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:55.095 05:10:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:55.095 05:10:37 version -- scripts/common.sh@336 -- # IFS=.-: 01:15:55.095 05:10:37 version -- scripts/common.sh@336 -- # read -ra ver1 01:15:55.095 05:10:37 version -- scripts/common.sh@337 -- # IFS=.-: 01:15:55.095 05:10:37 version -- scripts/common.sh@337 -- # read -ra ver2 01:15:55.095 05:10:37 version -- scripts/common.sh@338 -- # local 'op=<' 01:15:55.095 05:10:37 version -- scripts/common.sh@340 -- # ver1_l=2 01:15:55.095 05:10:37 version -- scripts/common.sh@341 -- # ver2_l=1 01:15:55.095 05:10:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:55.095 05:10:37 version -- scripts/common.sh@344 -- # case "$op" in 01:15:55.095 05:10:37 version -- scripts/common.sh@345 -- # : 1 01:15:55.095 05:10:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:55.095 05:10:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:15:55.095 05:10:37 version -- scripts/common.sh@365 -- # decimal 1 01:15:55.095 05:10:37 version -- scripts/common.sh@353 -- # local d=1 01:15:55.095 05:10:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:55.095 05:10:37 version -- scripts/common.sh@355 -- # echo 1 01:15:55.095 05:10:37 version -- scripts/common.sh@365 -- # ver1[v]=1 01:15:55.095 05:10:37 version -- scripts/common.sh@366 -- # decimal 2 01:15:55.095 05:10:37 version -- scripts/common.sh@353 -- # local d=2 01:15:55.095 05:10:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:55.095 05:10:37 version -- scripts/common.sh@355 -- # echo 2 01:15:55.095 05:10:37 version -- scripts/common.sh@366 -- # ver2[v]=2 01:15:55.095 05:10:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:55.095 05:10:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:55.095 05:10:37 version -- scripts/common.sh@368 -- # return 0 01:15:55.095 05:10:37 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:55.095 05:10:37 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:55.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:55.095 --rc genhtml_branch_coverage=1 01:15:55.095 --rc genhtml_function_coverage=1 01:15:55.095 --rc genhtml_legend=1 01:15:55.095 --rc geninfo_all_blocks=1 01:15:55.095 --rc geninfo_unexecuted_blocks=1 01:15:55.095 01:15:55.095 ' 01:15:55.095 05:10:37 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:55.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:55.095 --rc genhtml_branch_coverage=1 01:15:55.095 --rc genhtml_function_coverage=1 01:15:55.095 --rc genhtml_legend=1 01:15:55.095 --rc geninfo_all_blocks=1 01:15:55.095 --rc geninfo_unexecuted_blocks=1 01:15:55.095 01:15:55.095 ' 01:15:55.095 05:10:37 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:55.095 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 01:15:55.095 --rc genhtml_branch_coverage=1 01:15:55.095 --rc genhtml_function_coverage=1 01:15:55.095 --rc genhtml_legend=1 01:15:55.095 --rc geninfo_all_blocks=1 01:15:55.095 --rc geninfo_unexecuted_blocks=1 01:15:55.095 01:15:55.095 ' 01:15:55.095 05:10:37 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:55.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:55.095 --rc genhtml_branch_coverage=1 01:15:55.095 --rc genhtml_function_coverage=1 01:15:55.095 --rc genhtml_legend=1 01:15:55.095 --rc geninfo_all_blocks=1 01:15:55.095 --rc geninfo_unexecuted_blocks=1 01:15:55.095 01:15:55.095 ' 01:15:55.095 05:10:37 version -- app/version.sh@17 -- # get_header_version major 01:15:55.095 05:10:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:15:55.095 05:10:37 version -- app/version.sh@14 -- # cut -f2 01:15:55.095 05:10:37 version -- app/version.sh@14 -- # tr -d '"' 01:15:55.095 05:10:37 version -- app/version.sh@17 -- # major=25 01:15:55.095 05:10:37 version -- app/version.sh@18 -- # get_header_version minor 01:15:55.095 05:10:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:15:55.095 05:10:37 version -- app/version.sh@14 -- # cut -f2 01:15:55.095 05:10:37 version -- app/version.sh@14 -- # tr -d '"' 01:15:55.095 05:10:37 version -- app/version.sh@18 -- # minor=1 01:15:55.095 05:10:37 version -- app/version.sh@19 -- # get_header_version patch 01:15:55.095 05:10:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:15:55.095 05:10:37 version -- app/version.sh@14 -- # cut -f2 01:15:55.095 05:10:37 version -- app/version.sh@14 -- # tr -d '"' 01:15:55.095 05:10:37 version -- app/version.sh@19 -- # patch=0 01:15:55.095 05:10:37 version -- app/version.sh@20 -- # get_header_version suffix 01:15:55.095 05:10:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 01:15:55.095 05:10:37 version -- app/version.sh@14 -- # cut -f2 01:15:55.095 05:10:37 version -- app/version.sh@14 -- # tr -d '"' 01:15:55.095 05:10:37 version -- app/version.sh@20 -- # suffix=-pre 01:15:55.095 05:10:37 version -- app/version.sh@22 -- # version=25.1 01:15:55.095 05:10:37 version -- app/version.sh@25 -- # (( patch != 0 )) 01:15:55.095 05:10:37 version -- app/version.sh@28 -- # version=25.1rc0 01:15:55.095 05:10:37 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 01:15:55.095 05:10:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 01:15:55.095 05:10:37 version -- app/version.sh@30 -- # py_version=25.1rc0 01:15:55.095 05:10:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 01:15:55.095 ************************************ 01:15:55.095 END TEST version 01:15:55.095 ************************************ 01:15:55.095 01:15:55.095 real 0m0.308s 01:15:55.095 user 0m0.189s 01:15:55.095 sys 0m0.173s 01:15:55.095 05:10:37 version -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:55.095 05:10:37 version -- common/autotest_common.sh@10 -- # set +x 01:15:55.095 05:10:37 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 01:15:55.095 05:10:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 01:15:55.095 05:10:37 -- spdk/autotest.sh@194 -- # uname -s 01:15:55.095 05:10:37 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 01:15:55.095 05:10:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 01:15:55.095 05:10:37 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 01:15:55.095 05:10:37 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 01:15:55.095 05:10:37 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 01:15:55.095 05:10:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:15:55.095 05:10:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:55.095 05:10:37 -- common/autotest_common.sh@10 -- # set +x 01:15:55.095 ************************************ 01:15:55.095 START TEST spdk_dd 01:15:55.095 ************************************ 01:15:55.095 05:10:37 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 01:15:55.354 * Looking for test storage... 01:15:55.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:15:55.354 05:10:37 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:55.354 05:10:37 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 01:15:55.354 05:10:37 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:55.354 05:10:37 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@345 -- # : 1 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@365 -- # decimal 1 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@353 -- # local d=1 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@355 -- # echo 1 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@366 -- # decimal 2 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@353 -- # local d=2 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@355 -- # echo 2 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@368 -- # return 0 01:15:55.354 05:10:37 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:55.354 05:10:37 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:55.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:55.354 --rc genhtml_branch_coverage=1 01:15:55.354 --rc genhtml_function_coverage=1 01:15:55.354 --rc genhtml_legend=1 01:15:55.354 --rc geninfo_all_blocks=1 01:15:55.354 --rc geninfo_unexecuted_blocks=1 01:15:55.354 01:15:55.354 ' 01:15:55.354 05:10:37 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:55.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:55.354 --rc genhtml_branch_coverage=1 01:15:55.354 --rc genhtml_function_coverage=1 01:15:55.354 --rc genhtml_legend=1 01:15:55.354 --rc geninfo_all_blocks=1 01:15:55.354 --rc geninfo_unexecuted_blocks=1 01:15:55.354 01:15:55.354 ' 01:15:55.354 05:10:37 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:55.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:55.354 --rc genhtml_branch_coverage=1 01:15:55.354 --rc genhtml_function_coverage=1 01:15:55.354 --rc genhtml_legend=1 01:15:55.354 --rc geninfo_all_blocks=1 01:15:55.354 --rc geninfo_unexecuted_blocks=1 01:15:55.354 01:15:55.354 ' 01:15:55.354 05:10:37 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:55.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:55.354 --rc genhtml_branch_coverage=1 01:15:55.354 --rc genhtml_function_coverage=1 01:15:55.354 --rc genhtml_legend=1 01:15:55.354 --rc geninfo_all_blocks=1 01:15:55.354 --rc geninfo_unexecuted_blocks=1 01:15:55.354 01:15:55.354 ' 01:15:55.354 05:10:37 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:15:55.354 05:10:37 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:15:55.354 05:10:37 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:55.354 05:10:37 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:55.355 05:10:37 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:55.355 05:10:37 spdk_dd -- paths/export.sh@5 -- # export PATH 01:15:55.355 05:10:37 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:55.355 05:10:37 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:15:55.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:15:55.921 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:15:55.921 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:15:55.921 05:10:38 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 01:15:55.921 05:10:38 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@313 -- # local nvmes 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@298 -- # local bdf= 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@233 -- # local class 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@234 -- # local subclass 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@235 -- # local progif 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@236 -- # class=01 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@237 -- # subclass=08 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 01:15:55.921 05:10:38 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@240 -- # hash lspci 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 01:15:55.921 05:10:38 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 01:15:55.922 05:10:38 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:15:55.922 05:10:38 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 01:15:55.922 05:10:38 spdk_dd -- scripts/common.sh@18 -- # local i 01:15:55.922 05:10:38 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 01:15:55.922 05:10:38 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 01:15:55.922 05:10:38 spdk_dd -- scripts/common.sh@27 -- # return 0 01:15:55.922 05:10:38 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 01:15:55.922 05:10:38 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:15:55.922 05:10:38 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 01:15:55.922 05:10:38 spdk_dd -- scripts/common.sh@18 -- # local i 01:15:55.922 05:10:38 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 01:15:55.922 05:10:38 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 01:15:55.922 05:10:38 spdk_dd -- scripts/common.sh@27 -- # return 0 01:15:55.922 05:10:38 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 01:15:55.922 05:10:38 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:15:56.183 05:10:38 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 01:15:56.183 05:10:38 spdk_dd -- scripts/common.sh@323 -- # uname -s 01:15:56.183 05:10:38 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:15:56.183 05:10:38 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:15:56.183 05:10:38 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:15:56.183 05:10:38 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 01:15:56.183 05:10:38 spdk_dd -- scripts/common.sh@323 -- # uname -s 01:15:56.183 05:10:38 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:15:56.183 05:10:38 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:15:56.183 05:10:38 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 01:15:56.183 05:10:38 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:15:56.183 05:10:38 spdk_dd -- dd/dd.sh@13 -- # check_liburing 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@139 -- # local lib 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
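The device scan that setup.sh and nvme_in_userspace just completed amounts to filtering lspci output for PCI class 01 (mass storage), subclass 08 (non-volatile memory) and prog-if 02 (NVMe), which is how the two QEMU controllers at 0000:00:10.0 and 0000:00:11.0 were found. A minimal standalone sketch of that filter, assuming only that lspci is installed; the wrapper name list_nvme_bdfs is illustrative and not an SPDK helper:

    # Print the PCI addresses (BDFs) of all NVMe controllers, mirroring the
    # iter_pci_class_code 01 08 02 pipeline traced from scripts/common.sh above.
    list_nvme_bdfs() {
        # lspci -mm: machine-readable, -n: numeric IDs, -D: include the PCI domain.
        # grep keeps the prog-if 02 entries, awk keeps class/subclass "0108",
        # tr strips the quotes that lspci -mm puts around each field.
        lspci -mm -n -D | grep -i -- -p02 \
            | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    }
    list_nvme_bdfs    # on this VM: 0000:00:10.0 and 0000:00:11.0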
01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 01:15:56.183 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
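The wall of [[ lib == liburing.so.* ]] tests here, continuing below, is check_liburing walking every DT_NEEDED entry of the spdk_dd binary until it finds a liburing.so.* dependency, which produces the "* spdk_dd linked to liburing" line further down and ultimately liburing_in_use=1. A rough sketch of the same scan; as a simplification it takes the binary path as an argument and returns on the first match:

    # Succeed (exit 0) if the given ELF binary lists liburing among its
    # DT_NEEDED shared-library dependencies.
    check_liburing() {
        local bin=$1 lib
        while read -r _ lib _; do
            if [[ $lib == liburing.so.* ]]; then
                printf '%s is linked to liburing\n' "$bin"
                return 0
            fi
        done < <(objdump -p "$bin" | grep NEEDED)
        return 1
    }
    check_liburing /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd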
01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.184 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 01:15:56.185 * spdk_dd linked to liburing 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 01:15:56.185 05:10:38 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 01:15:56.185 05:10:38 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 01:15:56.185 05:10:38 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 01:15:56.186 05:10:38 spdk_dd -- dd/common.sh@153 -- # return 0 01:15:56.186 05:10:38 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 01:15:56.186 05:10:38 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 01:15:56.186 05:10:38 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:15:56.186 05:10:38 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:56.186 05:10:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:15:56.186 ************************************ 01:15:56.186 START TEST spdk_dd_basic_rw 01:15:56.186 ************************************ 01:15:56.186 05:10:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 01:15:56.186 * Looking for test storage... 01:15:56.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:15:56.186 05:10:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:15:56.186 05:10:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 01:15:56.186 05:10:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:15:56.447 05:10:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:15:56.447 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:15:56.447 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:15:56.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:56.448 --rc genhtml_branch_coverage=1 01:15:56.448 --rc genhtml_function_coverage=1 01:15:56.448 --rc genhtml_legend=1 01:15:56.448 --rc geninfo_all_blocks=1 01:15:56.448 --rc geninfo_unexecuted_blocks=1 01:15:56.448 01:15:56.448 ' 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:15:56.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:56.448 --rc genhtml_branch_coverage=1 01:15:56.448 --rc genhtml_function_coverage=1 01:15:56.448 --rc genhtml_legend=1 01:15:56.448 --rc geninfo_all_blocks=1 01:15:56.448 --rc geninfo_unexecuted_blocks=1 01:15:56.448 01:15:56.448 ' 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:15:56.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:56.448 --rc genhtml_branch_coverage=1 01:15:56.448 --rc genhtml_function_coverage=1 01:15:56.448 --rc genhtml_legend=1 01:15:56.448 --rc geninfo_all_blocks=1 01:15:56.448 --rc geninfo_unexecuted_blocks=1 01:15:56.448 01:15:56.448 ' 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:15:56.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:15:56.448 --rc genhtml_branch_coverage=1 01:15:56.448 --rc genhtml_function_coverage=1 01:15:56.448 --rc genhtml_legend=1 01:15:56.448 --rc geninfo_all_blocks=1 01:15:56.448 --rc geninfo_unexecuted_blocks=1 01:15:56.448 01:15:56.448 ' 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
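The step that follows, get_native_nvme_bs 0000:00:10.0, captures the full spdk_nvme_identify dump for the controller and extracts the block size the namespace is actually formatted with: it first finds the "Current LBA Format" index (here #04), then that format's "Data Size" (4096). A compact sketch of the same two regex extractions, assuming spdk_nvme_identify is on PATH (the run above uses the full build/bin path) and identify output shaped like the dump below:

    # Print the native LBA data size, in bytes, of the NVMe controller at the
    # given PCI address, e.g. get_native_nvme_bs 0000:00:10.0 -> 4096.
    get_native_nvme_bs() {
        local pci=$1 id lbaf
        local re_cur='Current LBA Format: *LBA Format #([0-9]+)'
        id=$(spdk_nvme_identify -r "trtype:pcie traddr:$pci") || return 1
        [[ $id =~ $re_cur ]] || return 1
        lbaf=${BASH_REMATCH[1]}
        local re_bs="LBA Format #${lbaf}: Data Size: *([0-9]+)"
        [[ $id =~ $re_bs ]] || return 1
        echo "${BASH_REMATCH[1]}"
    }

The 4096 this yields becomes native_bs for the rest of basic_rw.sh; dd_bs_lt_native_bs further down then runs spdk_dd with --bs=2048 on purpose and, via the NOT wrapper, treats the resulting "--bs value cannot be less than ... native block size" failure as a pass.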
01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 01:15:56.448 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 01:15:56.710 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 01:15:56.710 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:15:56.711 ************************************ 01:15:56.711 START TEST dd_bs_lt_native_bs 01:15:56.711 ************************************ 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:15:56.711 05:10:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 01:15:56.711 { 01:15:56.711 "subsystems": [ 01:15:56.711 { 01:15:56.711 "subsystem": "bdev", 01:15:56.711 "config": [ 01:15:56.711 { 01:15:56.711 "params": { 01:15:56.711 "trtype": "pcie", 01:15:56.711 "traddr": "0000:00:10.0", 01:15:56.711 "name": "Nvme0" 01:15:56.711 }, 01:15:56.711 "method": "bdev_nvme_attach_controller" 01:15:56.711 }, 01:15:56.711 { 01:15:56.711 "method": "bdev_wait_for_examine" 01:15:56.711 } 01:15:56.711 ] 01:15:56.711 } 01:15:56.711 ] 01:15:56.711 } 01:15:56.711 [2024-12-09 05:10:39.024960] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:15:56.711 [2024-12-09 05:10:39.025033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59793 ] 01:15:56.971 [2024-12-09 05:10:39.178386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:56.971 [2024-12-09 05:10:39.225056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:56.971 [2024-12-09 05:10:39.266848] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:56.971 [2024-12-09 05:10:39.367690] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 01:15:56.971 [2024-12-09 05:10:39.367743] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:15:57.232 [2024-12-09 05:10:39.469682] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:15:57.232 ************************************ 01:15:57.232 END TEST dd_bs_lt_native_bs 01:15:57.232 ************************************ 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:57.232 01:15:57.232 real 0m0.607s 01:15:57.232 user 0m0.406s 01:15:57.232 sys 0m0.145s 01:15:57.232 
05:10:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:15:57.232 ************************************ 01:15:57.232 START TEST dd_rw 01:15:57.232 ************************************ 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:15:57.232 05:10:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:15:57.802 05:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 01:15:57.802 05:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:15:57.802 05:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:15:57.802 05:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:15:57.802 [2024-12-09 05:10:40.111728] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:15:57.802 [2024-12-09 05:10:40.111866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59824 ] 01:15:57.802 { 01:15:57.802 "subsystems": [ 01:15:57.802 { 01:15:57.802 "subsystem": "bdev", 01:15:57.802 "config": [ 01:15:57.802 { 01:15:57.802 "params": { 01:15:57.802 "trtype": "pcie", 01:15:57.802 "traddr": "0000:00:10.0", 01:15:57.802 "name": "Nvme0" 01:15:57.802 }, 01:15:57.802 "method": "bdev_nvme_attach_controller" 01:15:57.802 }, 01:15:57.802 { 01:15:57.802 "method": "bdev_wait_for_examine" 01:15:57.802 } 01:15:57.802 ] 01:15:57.802 } 01:15:57.802 ] 01:15:57.802 } 01:15:58.062 [2024-12-09 05:10:40.262828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:58.062 [2024-12-09 05:10:40.307367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:58.062 [2024-12-09 05:10:40.348035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:58.062  [2024-12-09T05:10:40.778Z] Copying: 60/60 [kB] (average 29 MBps) 01:15:58.322 01:15:58.322 05:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 01:15:58.322 05:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:15:58.322 05:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:15:58.322 05:10:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:15:58.322 { 01:15:58.322 "subsystems": [ 01:15:58.322 { 01:15:58.322 "subsystem": "bdev", 01:15:58.322 "config": [ 01:15:58.322 { 01:15:58.322 "params": { 01:15:58.322 "trtype": "pcie", 01:15:58.322 "traddr": "0000:00:10.0", 01:15:58.322 "name": "Nvme0" 01:15:58.322 }, 01:15:58.322 "method": "bdev_nvme_attach_controller" 01:15:58.322 }, 01:15:58.322 { 01:15:58.322 "method": "bdev_wait_for_examine" 01:15:58.322 } 01:15:58.322 ] 01:15:58.322 } 01:15:58.322 ] 01:15:58.322 } 01:15:58.322 [2024-12-09 05:10:40.699247] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:15:58.322 [2024-12-09 05:10:40.699315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59838 ] 01:15:58.583 [2024-12-09 05:10:40.849161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:58.583 [2024-12-09 05:10:40.902588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:58.583 [2024-12-09 05:10:40.945839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:58.843  [2024-12-09T05:10:41.299Z] Copying: 60/60 [kB] (average 19 MBps) 01:15:58.843 01:15:58.843 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:15:58.843 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 01:15:58.843 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:15:58.843 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:15:58.843 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 01:15:58.843 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:15:58.843 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:15:58.843 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:15:58.843 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:15:58.843 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:15:58.843 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:15:59.102 [2024-12-09 05:10:41.299748] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:15:59.102 [2024-12-09 05:10:41.299888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59853 ] 01:15:59.102 { 01:15:59.102 "subsystems": [ 01:15:59.102 { 01:15:59.102 "subsystem": "bdev", 01:15:59.102 "config": [ 01:15:59.102 { 01:15:59.102 "params": { 01:15:59.102 "trtype": "pcie", 01:15:59.102 "traddr": "0000:00:10.0", 01:15:59.102 "name": "Nvme0" 01:15:59.102 }, 01:15:59.102 "method": "bdev_nvme_attach_controller" 01:15:59.102 }, 01:15:59.102 { 01:15:59.102 "method": "bdev_wait_for_examine" 01:15:59.102 } 01:15:59.102 ] 01:15:59.102 } 01:15:59.102 ] 01:15:59.102 } 01:15:59.102 [2024-12-09 05:10:41.452187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:15:59.102 [2024-12-09 05:10:41.497165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:15:59.102 [2024-12-09 05:10:41.538469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:15:59.360  [2024-12-09T05:10:42.075Z] Copying: 1024/1024 [kB] (average 500 MBps) 01:15:59.619 01:15:59.619 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:15:59.619 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 01:15:59.619 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 01:15:59.619 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 01:15:59.619 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 01:15:59.619 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:15:59.619 05:10:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:15:59.920 05:10:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 01:15:59.920 05:10:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:15:59.920 05:10:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:15:59.920 05:10:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:15:59.920 [2024-12-09 05:10:42.337969] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:15:59.920 [2024-12-09 05:10:42.338099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59872 ] 01:15:59.920 { 01:15:59.920 "subsystems": [ 01:15:59.920 { 01:15:59.920 "subsystem": "bdev", 01:15:59.920 "config": [ 01:15:59.920 { 01:15:59.920 "params": { 01:15:59.920 "trtype": "pcie", 01:15:59.920 "traddr": "0000:00:10.0", 01:15:59.920 "name": "Nvme0" 01:15:59.920 }, 01:15:59.920 "method": "bdev_nvme_attach_controller" 01:15:59.920 }, 01:15:59.920 { 01:15:59.920 "method": "bdev_wait_for_examine" 01:15:59.920 } 01:15:59.920 ] 01:15:59.920 } 01:15:59.920 ] 01:15:59.920 } 01:16:00.178 [2024-12-09 05:10:42.487875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:00.178 [2024-12-09 05:10:42.537587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:00.178 [2024-12-09 05:10:42.584392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:00.435  [2024-12-09T05:10:43.150Z] Copying: 60/60 [kB] (average 58 MBps) 01:16:00.694 01:16:00.694 05:10:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:16:00.694 05:10:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 01:16:00.694 05:10:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:16:00.694 05:10:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:00.694 { 01:16:00.694 "subsystems": [ 01:16:00.694 { 01:16:00.694 "subsystem": "bdev", 01:16:00.694 "config": [ 01:16:00.694 { 01:16:00.694 "params": { 01:16:00.694 "trtype": "pcie", 01:16:00.694 "traddr": "0000:00:10.0", 01:16:00.694 "name": "Nvme0" 01:16:00.694 }, 01:16:00.694 "method": "bdev_nvme_attach_controller" 01:16:00.694 }, 01:16:00.694 { 01:16:00.694 "method": "bdev_wait_for_examine" 01:16:00.694 } 01:16:00.694 ] 01:16:00.694 } 01:16:00.694 ] 01:16:00.694 } 01:16:00.694 [2024-12-09 05:10:42.959435] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:00.694 [2024-12-09 05:10:42.959559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59886 ] 01:16:00.694 [2024-12-09 05:10:43.095760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:00.952 [2024-12-09 05:10:43.149864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:00.952 [2024-12-09 05:10:43.192534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:00.952  [2024-12-09T05:10:43.666Z] Copying: 60/60 [kB] (average 58 MBps) 01:16:01.210 01:16:01.210 05:10:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:01.210 05:10:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 01:16:01.210 05:10:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:16:01.210 05:10:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:16:01.210 05:10:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 01:16:01.210 05:10:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:16:01.210 05:10:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:16:01.210 05:10:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:16:01.210 05:10:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:16:01.210 05:10:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:16:01.210 05:10:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:01.210 { 01:16:01.210 "subsystems": [ 01:16:01.210 { 01:16:01.210 "subsystem": "bdev", 01:16:01.210 "config": [ 01:16:01.210 { 01:16:01.210 "params": { 01:16:01.210 "trtype": "pcie", 01:16:01.210 "traddr": "0000:00:10.0", 01:16:01.210 "name": "Nvme0" 01:16:01.210 }, 01:16:01.210 "method": "bdev_nvme_attach_controller" 01:16:01.210 }, 01:16:01.210 { 01:16:01.210 "method": "bdev_wait_for_examine" 01:16:01.210 } 01:16:01.210 ] 01:16:01.210 } 01:16:01.210 ] 01:16:01.210 } 01:16:01.210 [2024-12-09 05:10:43.561671] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:01.210 [2024-12-09 05:10:43.561789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59901 ] 01:16:01.468 [2024-12-09 05:10:43.714026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:01.468 [2024-12-09 05:10:43.762350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:01.468 [2024-12-09 05:10:43.804169] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:01.468  [2024-12-09T05:10:44.182Z] Copying: 1024/1024 [kB] (average 1000 MBps) 01:16:01.726 01:16:01.726 05:10:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 01:16:01.726 05:10:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:16:01.726 05:10:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 01:16:01.726 05:10:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 01:16:01.726 05:10:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 01:16:01.726 05:10:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 01:16:01.726 05:10:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:16:01.726 05:10:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:02.292 05:10:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 01:16:02.292 05:10:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:16:02.292 05:10:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:16:02.292 05:10:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:02.292 [2024-12-09 05:10:44.572674] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:02.292 [2024-12-09 05:10:44.572847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59920 ] 01:16:02.292 { 01:16:02.292 "subsystems": [ 01:16:02.292 { 01:16:02.292 "subsystem": "bdev", 01:16:02.292 "config": [ 01:16:02.292 { 01:16:02.292 "params": { 01:16:02.292 "trtype": "pcie", 01:16:02.292 "traddr": "0000:00:10.0", 01:16:02.292 "name": "Nvme0" 01:16:02.292 }, 01:16:02.292 "method": "bdev_nvme_attach_controller" 01:16:02.292 }, 01:16:02.292 { 01:16:02.292 "method": "bdev_wait_for_examine" 01:16:02.292 } 01:16:02.292 ] 01:16:02.292 } 01:16:02.292 ] 01:16:02.292 } 01:16:02.292 [2024-12-09 05:10:44.725104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:02.550 [2024-12-09 05:10:44.785176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:02.550 [2024-12-09 05:10:44.835612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:02.550  [2024-12-09T05:10:45.263Z] Copying: 56/56 [kB] (average 54 MBps) 01:16:02.807 01:16:02.807 05:10:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 01:16:02.807 05:10:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:16:02.807 05:10:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:16:02.807 05:10:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:02.807 [2024-12-09 05:10:45.226607] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:02.807 [2024-12-09 05:10:45.226758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59938 ] 01:16:02.807 { 01:16:02.807 "subsystems": [ 01:16:02.807 { 01:16:02.807 "subsystem": "bdev", 01:16:02.807 "config": [ 01:16:02.807 { 01:16:02.807 "params": { 01:16:02.807 "trtype": "pcie", 01:16:02.807 "traddr": "0000:00:10.0", 01:16:02.807 "name": "Nvme0" 01:16:02.807 }, 01:16:02.807 "method": "bdev_nvme_attach_controller" 01:16:02.807 }, 01:16:02.807 { 01:16:02.807 "method": "bdev_wait_for_examine" 01:16:02.807 } 01:16:02.807 ] 01:16:02.807 } 01:16:02.807 ] 01:16:02.807 } 01:16:03.064 [2024-12-09 05:10:45.383153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:03.064 [2024-12-09 05:10:45.428708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:03.064 [2024-12-09 05:10:45.468762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:03.322  [2024-12-09T05:10:45.778Z] Copying: 56/56 [kB] (average 27 MBps) 01:16:03.322 01:16:03.322 05:10:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:03.322 05:10:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 01:16:03.322 05:10:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:16:03.323 05:10:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:16:03.323 05:10:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 01:16:03.323 05:10:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:16:03.323 05:10:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:16:03.323 05:10:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:16:03.323 05:10:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:16:03.323 05:10:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:16:03.323 05:10:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:03.579 { 01:16:03.579 "subsystems": [ 01:16:03.579 { 01:16:03.579 "subsystem": "bdev", 01:16:03.579 "config": [ 01:16:03.579 { 01:16:03.579 "params": { 01:16:03.579 "trtype": "pcie", 01:16:03.579 "traddr": "0000:00:10.0", 01:16:03.579 "name": "Nvme0" 01:16:03.579 }, 01:16:03.579 "method": "bdev_nvme_attach_controller" 01:16:03.579 }, 01:16:03.579 { 01:16:03.579 "method": "bdev_wait_for_examine" 01:16:03.579 } 01:16:03.579 ] 01:16:03.579 } 01:16:03.579 ] 01:16:03.579 } 01:16:03.579 [2024-12-09 05:10:45.822890] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:03.579 [2024-12-09 05:10:45.823000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59955 ] 01:16:03.579 [2024-12-09 05:10:45.973496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:03.579 [2024-12-09 05:10:46.017945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:03.837 [2024-12-09 05:10:46.058387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:03.837  [2024-12-09T05:10:46.551Z] Copying: 1024/1024 [kB] (average 1000 MBps) 01:16:04.095 01:16:04.095 05:10:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:16:04.095 05:10:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 01:16:04.095 05:10:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 01:16:04.095 05:10:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 01:16:04.095 05:10:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 01:16:04.095 05:10:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:16:04.095 05:10:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:04.353 05:10:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 01:16:04.353 05:10:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:16:04.353 05:10:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:16:04.353 05:10:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:04.353 [2024-12-09 05:10:46.800602] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:04.353 [2024-12-09 05:10:46.800665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59974 ] 01:16:04.353 { 01:16:04.353 "subsystems": [ 01:16:04.353 { 01:16:04.353 "subsystem": "bdev", 01:16:04.353 "config": [ 01:16:04.353 { 01:16:04.353 "params": { 01:16:04.353 "trtype": "pcie", 01:16:04.353 "traddr": "0000:00:10.0", 01:16:04.353 "name": "Nvme0" 01:16:04.353 }, 01:16:04.353 "method": "bdev_nvme_attach_controller" 01:16:04.353 }, 01:16:04.353 { 01:16:04.353 "method": "bdev_wait_for_examine" 01:16:04.353 } 01:16:04.353 ] 01:16:04.353 } 01:16:04.353 ] 01:16:04.353 } 01:16:04.611 [2024-12-09 05:10:46.950886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:04.611 [2024-12-09 05:10:46.995249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:04.611 [2024-12-09 05:10:47.034966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:04.868  [2024-12-09T05:10:47.581Z] Copying: 56/56 [kB] (average 54 MBps) 01:16:05.125 01:16:05.125 05:10:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 01:16:05.125 05:10:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:16:05.125 05:10:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:16:05.125 05:10:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:05.125 [2024-12-09 05:10:47.390650] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:05.125 [2024-12-09 05:10:47.390711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59987 ] 01:16:05.125 { 01:16:05.125 "subsystems": [ 01:16:05.125 { 01:16:05.125 "subsystem": "bdev", 01:16:05.125 "config": [ 01:16:05.125 { 01:16:05.125 "params": { 01:16:05.125 "trtype": "pcie", 01:16:05.125 "traddr": "0000:00:10.0", 01:16:05.125 "name": "Nvme0" 01:16:05.125 }, 01:16:05.125 "method": "bdev_nvme_attach_controller" 01:16:05.125 }, 01:16:05.125 { 01:16:05.125 "method": "bdev_wait_for_examine" 01:16:05.125 } 01:16:05.125 ] 01:16:05.125 } 01:16:05.125 ] 01:16:05.125 } 01:16:05.125 [2024-12-09 05:10:47.540210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:05.383 [2024-12-09 05:10:47.588092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:05.383 [2024-12-09 05:10:47.628566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:05.383  [2024-12-09T05:10:48.097Z] Copying: 56/56 [kB] (average 54 MBps) 01:16:05.641 01:16:05.641 05:10:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:05.641 05:10:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 01:16:05.641 05:10:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:16:05.641 05:10:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:16:05.641 05:10:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 01:16:05.641 05:10:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:16:05.641 05:10:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:16:05.641 05:10:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:16:05.641 05:10:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:16:05.641 05:10:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:16:05.641 05:10:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:05.641 [2024-12-09 05:10:47.982891] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:05.641 [2024-12-09 05:10:47.982950] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60003 ] 01:16:05.641 { 01:16:05.641 "subsystems": [ 01:16:05.641 { 01:16:05.641 "subsystem": "bdev", 01:16:05.641 "config": [ 01:16:05.641 { 01:16:05.641 "params": { 01:16:05.641 "trtype": "pcie", 01:16:05.641 "traddr": "0000:00:10.0", 01:16:05.641 "name": "Nvme0" 01:16:05.641 }, 01:16:05.641 "method": "bdev_nvme_attach_controller" 01:16:05.641 }, 01:16:05.641 { 01:16:05.641 "method": "bdev_wait_for_examine" 01:16:05.641 } 01:16:05.641 ] 01:16:05.641 } 01:16:05.641 ] 01:16:05.641 } 01:16:05.899 [2024-12-09 05:10:48.134669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:05.899 [2024-12-09 05:10:48.184857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:05.899 [2024-12-09 05:10:48.224651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:05.899  [2024-12-09T05:10:48.611Z] Copying: 1024/1024 [kB] (average 500 MBps) 01:16:06.155 01:16:06.155 05:10:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 01:16:06.155 05:10:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:16:06.155 05:10:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 01:16:06.155 05:10:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 01:16:06.155 05:10:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 01:16:06.155 05:10:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 01:16:06.155 05:10:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:16:06.155 05:10:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:06.721 05:10:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 01:16:06.721 05:10:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:16:06.721 05:10:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:16:06.721 05:10:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:06.721 [2024-12-09 05:10:48.932273] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:06.721 [2024-12-09 05:10:48.932834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60022 ] 01:16:06.721 { 01:16:06.721 "subsystems": [ 01:16:06.721 { 01:16:06.721 "subsystem": "bdev", 01:16:06.721 "config": [ 01:16:06.721 { 01:16:06.721 "params": { 01:16:06.721 "trtype": "pcie", 01:16:06.721 "traddr": "0000:00:10.0", 01:16:06.721 "name": "Nvme0" 01:16:06.721 }, 01:16:06.721 "method": "bdev_nvme_attach_controller" 01:16:06.721 }, 01:16:06.721 { 01:16:06.721 "method": "bdev_wait_for_examine" 01:16:06.721 } 01:16:06.721 ] 01:16:06.721 } 01:16:06.721 ] 01:16:06.721 } 01:16:06.721 [2024-12-09 05:10:49.066374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:06.721 [2024-12-09 05:10:49.115333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:06.721 [2024-12-09 05:10:49.157365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:06.979  [2024-12-09T05:10:49.693Z] Copying: 48/48 [kB] (average 46 MBps) 01:16:07.237 01:16:07.237 05:10:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 01:16:07.237 05:10:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:16:07.237 05:10:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:16:07.237 05:10:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:07.237 [2024-12-09 05:10:49.523621] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:07.237 [2024-12-09 05:10:49.523692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60035 ] 01:16:07.237 { 01:16:07.237 "subsystems": [ 01:16:07.237 { 01:16:07.237 "subsystem": "bdev", 01:16:07.237 "config": [ 01:16:07.237 { 01:16:07.237 "params": { 01:16:07.237 "trtype": "pcie", 01:16:07.237 "traddr": "0000:00:10.0", 01:16:07.237 "name": "Nvme0" 01:16:07.237 }, 01:16:07.237 "method": "bdev_nvme_attach_controller" 01:16:07.237 }, 01:16:07.237 { 01:16:07.237 "method": "bdev_wait_for_examine" 01:16:07.237 } 01:16:07.237 ] 01:16:07.237 } 01:16:07.237 ] 01:16:07.237 } 01:16:07.238 [2024-12-09 05:10:49.675947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:07.496 [2024-12-09 05:10:49.727142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:07.496 [2024-12-09 05:10:49.768732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:07.496  [2024-12-09T05:10:50.211Z] Copying: 48/48 [kB] (average 46 MBps) 01:16:07.755 01:16:07.755 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:07.755 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 01:16:07.755 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:16:07.755 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:16:07.755 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 01:16:07.755 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:16:07.755 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:16:07.755 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:16:07.755 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:16:07.755 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:16:07.755 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:07.755 [2024-12-09 05:10:50.129308] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:07.755 [2024-12-09 05:10:50.129399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60051 ] 01:16:07.755 { 01:16:07.755 "subsystems": [ 01:16:07.755 { 01:16:07.755 "subsystem": "bdev", 01:16:07.755 "config": [ 01:16:07.755 { 01:16:07.755 "params": { 01:16:07.755 "trtype": "pcie", 01:16:07.755 "traddr": "0000:00:10.0", 01:16:07.755 "name": "Nvme0" 01:16:07.755 }, 01:16:07.755 "method": "bdev_nvme_attach_controller" 01:16:07.755 }, 01:16:07.755 { 01:16:07.755 "method": "bdev_wait_for_examine" 01:16:07.755 } 01:16:07.755 ] 01:16:07.755 } 01:16:07.755 ] 01:16:07.755 } 01:16:08.014 [2024-12-09 05:10:50.282545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:08.014 [2024-12-09 05:10:50.336246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:08.014 [2024-12-09 05:10:50.376750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:08.272  [2024-12-09T05:10:50.728Z] Copying: 1024/1024 [kB] (average 1000 MBps) 01:16:08.272 01:16:08.272 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 01:16:08.272 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 01:16:08.272 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 01:16:08.272 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 01:16:08.272 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 01:16:08.272 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 01:16:08.272 05:10:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:08.838 05:10:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 01:16:08.838 05:10:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 01:16:08.838 05:10:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:16:08.838 05:10:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:08.838 [2024-12-09 05:10:51.063121] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:08.838 [2024-12-09 05:10:51.063280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60070 ] 01:16:08.838 { 01:16:08.838 "subsystems": [ 01:16:08.838 { 01:16:08.838 "subsystem": "bdev", 01:16:08.838 "config": [ 01:16:08.838 { 01:16:08.838 "params": { 01:16:08.838 "trtype": "pcie", 01:16:08.838 "traddr": "0000:00:10.0", 01:16:08.838 "name": "Nvme0" 01:16:08.838 }, 01:16:08.838 "method": "bdev_nvme_attach_controller" 01:16:08.838 }, 01:16:08.838 { 01:16:08.838 "method": "bdev_wait_for_examine" 01:16:08.838 } 01:16:08.838 ] 01:16:08.838 } 01:16:08.838 ] 01:16:08.838 } 01:16:08.838 [2024-12-09 05:10:51.200516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:08.838 [2024-12-09 05:10:51.249514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:08.838 [2024-12-09 05:10:51.289922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:09.097  [2024-12-09T05:10:51.812Z] Copying: 48/48 [kB] (average 46 MBps) 01:16:09.356 01:16:09.356 05:10:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 01:16:09.356 05:10:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 01:16:09.356 05:10:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:16:09.356 05:10:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:09.356 [2024-12-09 05:10:51.643505] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:09.356 [2024-12-09 05:10:51.643661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60083 ] 01:16:09.356 { 01:16:09.356 "subsystems": [ 01:16:09.356 { 01:16:09.356 "subsystem": "bdev", 01:16:09.356 "config": [ 01:16:09.356 { 01:16:09.356 "params": { 01:16:09.356 "trtype": "pcie", 01:16:09.356 "traddr": "0000:00:10.0", 01:16:09.356 "name": "Nvme0" 01:16:09.356 }, 01:16:09.356 "method": "bdev_nvme_attach_controller" 01:16:09.356 }, 01:16:09.356 { 01:16:09.356 "method": "bdev_wait_for_examine" 01:16:09.356 } 01:16:09.356 ] 01:16:09.356 } 01:16:09.356 ] 01:16:09.356 } 01:16:09.356 [2024-12-09 05:10:51.792291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:09.616 [2024-12-09 05:10:51.840058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:09.616 [2024-12-09 05:10:51.880123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:09.616  [2024-12-09T05:10:52.332Z] Copying: 48/48 [kB] (average 46 MBps) 01:16:09.876 01:16:09.876 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:09.876 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 01:16:09.876 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:16:09.876 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 01:16:09.876 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 01:16:09.876 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 01:16:09.876 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 01:16:09.876 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 01:16:09.876 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:16:09.876 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 01:16:09.876 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:09.876 { 01:16:09.876 "subsystems": [ 01:16:09.876 { 01:16:09.876 "subsystem": "bdev", 01:16:09.876 "config": [ 01:16:09.876 { 01:16:09.876 "params": { 01:16:09.876 "trtype": "pcie", 01:16:09.876 "traddr": "0000:00:10.0", 01:16:09.876 "name": "Nvme0" 01:16:09.876 }, 01:16:09.876 "method": "bdev_nvme_attach_controller" 01:16:09.876 }, 01:16:09.876 { 01:16:09.876 "method": "bdev_wait_for_examine" 01:16:09.876 } 01:16:09.876 ] 01:16:09.876 } 01:16:09.876 ] 01:16:09.876 } 01:16:09.876 [2024-12-09 05:10:52.238166] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:09.876 [2024-12-09 05:10:52.238233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60099 ] 01:16:10.135 [2024-12-09 05:10:52.388805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:10.135 [2024-12-09 05:10:52.439693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:10.135 [2024-12-09 05:10:52.480039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:10.135  [2024-12-09T05:10:52.850Z] Copying: 1024/1024 [kB] (average 1000 MBps) 01:16:10.394 01:16:10.394 ************************************ 01:16:10.394 END TEST dd_rw 01:16:10.394 ************************************ 01:16:10.394 01:16:10.395 real 0m13.152s 01:16:10.395 user 0m9.680s 01:16:10.395 sys 0m4.573s 01:16:10.395 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:10.395 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 01:16:10.395 05:10:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 01:16:10.395 05:10:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:10.395 05:10:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:10.395 05:10:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:16:10.395 ************************************ 01:16:10.395 START TEST dd_rw_offset 01:16:10.395 ************************************ 01:16:10.395 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 01:16:10.395 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 01:16:10.395 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 01:16:10.395 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 01:16:10.395 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 01:16:10.654 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 01:16:10.655 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=lb2nz9pxig6cm7ep1eg0hl6l56hg4k4h6jles6oh3fbl8lvi0382m9f3qrwfla20obg4yi1ce9mshjwjypaeu99cleth6jtwt9gx0d86tk9m83bo3xi6pkm6kdp5l9xk3kzgmuv0wrrvd4pe1bjj92gsggbbfcyfpa2i8omtecsf2vw14msrb2ooahlrmvk6qhiirozbhnecgu6llbk3r0wetf7m3iluqfhlf1waj9u7y9ev5lp8in21bg6jgiycdv8dd2x4b9xj5hfkjc8qw8069dflpq4j09m8pny281hdn87l2mkigewqtw0alt7chi1kyci5iu9b1q5ko4j9iht4tu9czrq0oiyk2p3n80uk5w3kbybllni7i1h5wzqbzlxevf65f3axmzgd8mq6ef06al1rs1ffcgopenb7qwic58qego7v6cmdis3rob4dgdz7ds8mtsn0sq8iitiurm4qivqz0ka912zgq8h1ixle6glfp90jgt47uanpw2alk07q5et01xc6pl1778hdr2pggrbntaf7913n6h9d6pp5v5mhkqyq5fz39qm8hmukklgrqbuehcgwpstjf51uio317s48vv6w64lbl3butwea5z57aucv1lamvyzg5z3vvui8oeauu45km1ed4rsmf27uw7q05gckqjzvlubxyqss9dy1de4274jvnc662mk37znkp7sk99bcyok1hu8qvk3dgzyfx14uzn5sk8gxwargcd1j6nocp3lzp0slxh6pu13829plmyxexcmuele3u41ron9xyk5fbqxvx3iu1fqcj5fkur662x9k9fnjpaaf1mqbp7z246xobcwyxzbwzbgtt8x2un8vgkfo4rklarlnqi1em04x15b5z4i6rtee3ln2c386fip4jl60wny0dph0i0zmw6koh5ljmlituto3bx00cf07nwajypy82r8bwdu4pjxf0sa7ojqv3sjgpqwma01nkou62v8s1mgpu0etsi3ijpa36fcek3pyeplsu8qg100yw1899x0ctujpf22i4ermqskpxfdc6gaf67z987m7b77lk9s3rg9wm1yohxaar53hbi5vqtpg8iolrrzrdlng5j0ujty2j0xhvq3g764avzrppybwhrcuq7mjbjqjfmi2pztl173uimgq91qu2i9rt4pcebnhmwijz6op43dh83n6cdhnp6glwr93zkxnbv9s4dngjc675ylq6r06iy7ghhlbayz0jxby7lr85k3v0it2jhdr9ue7crhcfv0j68o87lq7rydwwz5f9yi2hsagau7eqsth4e4kb8dspl2vo9i4t8dbxyzmjyod7nsysc1tj948p2pvxeb39kwcwnbrmrq248vgh8a05y6n6li3y4ederpg8wpjwngbwty50c7jsozhat6eqdfrvxc2h5bvxtclmqt0hhy1d9nsiou9mu3e5bvncgbcca99izc6hewk2mmcmkcgpmu9pj6hvfwps6bmvvsp62ttysxlh5auqm33t6sc12pop39k1mktyrdveqz3hyfj5y24500jz8g2h4ism30uahfzgdjgnmvupnzkkzehfpzybidk8r000cg25fb7gbzx9qb6w04e5uxb1bqprrugltvti0ruyh1lbkq4bwlvfgcttarcdige7kiysrc2ykwfge9p1g6qnra3abxc6v35jw98uh74qt9rxkkg3wpp884tetcj753z7nxl69rb78hsgsx3wb2lei66bfkxzrwzif85t57c0yv2g3p86woxtxjactc5t5za4a5o5tc6th76oqkzi3infe4bawo2gfu5t9mk4vlztsauwa09taojwwebttsfkwvxv6ecjwxnt24yedthjwd7uzwswhd5mkphx9yif8njxkuqc4junyrk0neqfxg57aa6akdepq189zp4nu58aepxbc2o9yz6tu7e8w1f3n2ilcvzrhpahjpsh3zywptuugmq9vawm7bh6gm143gipntk5xqv8v5n0jbnln0pa9i99fgayg1wnw0hc8pq6jq34566h3tmqr8soxe841vivqez50e1pq7mxz3xxmtvf9z9eenfq3mlrz0o9l0apzinkw7dmibzja63ahlg1o6i5u4529uodd7eq5kvdz9l04hb1wmleg3tasbyzgoj45n7wi0fbbh6xt5m1ao1y3lwwc11la7130hv6xzsw6rbo624x0v33ocuuldnhkt7pi4ao9kr0iss5rrib80p2uukkswz1e7g6shbmuc3np8l6tqs2sype8ma03w6ys12pu8mma05edjcxwxuacr94tnaii0s88dcx2vcrlogakxutkos0us4e0q4ofog920ofabd73vxub98jboqhgwrjm44zldyqi1y89f6hgltvcrm1kx87tyx7a65kectomg4lwchbgwhk1bhtdna05rz3f1zeevdy4fy20308f1krpej6p4vfxacde9rowzlvd8941dpsxnr993r0n7wwqi7ncsvgtgpmqe238l66tnri70bn4zcuxdjfaqblwwrqrh6djpmqgu2irhqb94c13patw2dc63lmx1rl38kriy0m8nrn5b9pj948i96jl72r456gypgbsd85757n3km3tp2yu2fuwgotovf3etr9e0yq8z05mrx5tcx1a2ips6xv65t2wfma9sl72p8zme5ym49wt2npix3akpyf1k0jmbq7yy9knt5m8ridu8zu7hrb7x3chywyjw4yf51kjszkh64slntxrr371dohmq7tovd2vrekt2kr7epbz3g23gl642i3njqkc9as0winq6jp5528zjj5on17tywn9guyzt9q5038oil6txrxaj92d0y5wbn98r5grhxeb0zgdb5awv5ovucwhna2m9w8qd9vt5u2bsdm5pk0f1z1599ykrb0ztqrxz2inl0uybj1b6xtvbqcuzmm4qyjgs0ylbgy4tcs208a5txq3hkxuopdnvgfq9jdosjm53620pptazkzxcuafbgck0ugp6g3h9em4h9ieff1cn7utrltoinfw0xd7difzx5lhyna40l67g2isqk72d97wqt1roougy241gwqkbg00m29wevbsqlceg2hp0yz5odcpw25pa9eiilv098g8hxndbykjelekkf81gfebo01xjj92ejdfat86e0mu6ywdd1g4rhzakjzc544oj2g1e7pvt4j4c0ewedbrn655x7nppu3e42up1gqnasc6qpb50sp5cx46u26xqyhknay6hv79ns8j0psggbawvgmnri1viplppcqeav4q22h0vwh5q37l42rrh076r4vxve51g3gonigoy0hfzk646ixu63z2neln1wcw2qolu9qpwy0zpnbqgj5ullcy3o4b3m7boyscsnd15n82lez0e9dt5bh57fzh3rrh3v2dinr555jt0759w0f0cncbwyqzs4yaq5bdfws2ucv5vu0yscdri3x7tm61s4kthtw6kuq5luoalx0wdzrvwhdf2ejdh3322a9ehm1d1u9j
o0ja09xtny9lqn1ah4cd7872i1i8wzg9v9meuc8nod02bkhi2ayulazebpvua8jdgi8uv99wcd0r2zjdugg8e6m5daerf8pyq4ta90ir3ejalpv1sy5h5xw5w1ex8jk15nbxmurjqqlay77qqwq186rac055ro9tan3f6v9zxuaehz0zd5kzh1iejsn5q6t83bdbnjg32684txidphk5wtw3kgilhlty08wrh0dbfva3bsrbb6zr5nuctrfeh1jzrcvs69z3lfnsrlgaeqotpr9zchap9olrggm3tmg8aiy485kdnmwoxazdg7gllo9bbn3posxckztkja9olbnpgiyhxt9ct4xva681ajbck4flldlj2jk4yxny7rkutnnul8iqlkm1ji9nnk4czdwse9wqygvi55546md8h88lzp9m40hmwjrxhakrth7x0efo03cj4v19li4mm1bnfalf3oe89yjqus5mbtpesqtcemn7xsgialuqhy1o8zu74yximiovrovlo2zujeaq9budx62ihvy9sanz4h 01:16:10.655 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 01:16:10.655 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 01:16:10.655 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 01:16:10.655 05:10:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 01:16:10.655 [2024-12-09 05:10:52.933046] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:10.655 [2024-12-09 05:10:52.933101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60135 ] 01:16:10.655 { 01:16:10.655 "subsystems": [ 01:16:10.655 { 01:16:10.655 "subsystem": "bdev", 01:16:10.655 "config": [ 01:16:10.655 { 01:16:10.655 "params": { 01:16:10.655 "trtype": "pcie", 01:16:10.655 "traddr": "0000:00:10.0", 01:16:10.655 "name": "Nvme0" 01:16:10.655 }, 01:16:10.655 "method": "bdev_nvme_attach_controller" 01:16:10.655 }, 01:16:10.655 { 01:16:10.655 "method": "bdev_wait_for_examine" 01:16:10.655 } 01:16:10.655 ] 01:16:10.655 } 01:16:10.655 ] 01:16:10.655 } 01:16:10.655 [2024-12-09 05:10:53.084843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:10.915 [2024-12-09 05:10:53.130319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:10.915 [2024-12-09 05:10:53.170362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:10.915  [2024-12-09T05:10:53.631Z] Copying: 4096/4096 [B] (average 4000 kBps) 01:16:11.175 01:16:11.175 05:10:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 01:16:11.175 05:10:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 01:16:11.175 05:10:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 01:16:11.175 05:10:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 01:16:11.175 { 01:16:11.175 "subsystems": [ 01:16:11.175 { 01:16:11.175 "subsystem": "bdev", 01:16:11.175 "config": [ 01:16:11.175 { 01:16:11.175 "params": { 01:16:11.175 "trtype": "pcie", 01:16:11.175 "traddr": "0000:00:10.0", 01:16:11.175 "name": "Nvme0" 01:16:11.175 }, 01:16:11.175 "method": "bdev_nvme_attach_controller" 01:16:11.175 }, 01:16:11.175 { 01:16:11.175 "method": "bdev_wait_for_examine" 01:16:11.175 } 01:16:11.175 ] 01:16:11.175 } 01:16:11.175 ] 01:16:11.175 } 01:16:11.175 [2024-12-09 05:10:53.523335] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:11.175 [2024-12-09 05:10:53.523412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60143 ] 01:16:11.435 [2024-12-09 05:10:53.674819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:11.435 [2024-12-09 05:10:53.736997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:11.435 [2024-12-09 05:10:53.789103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:11.695  [2024-12-09T05:10:54.151Z] Copying: 4096/4096 [B] (average 4000 kBps) 01:16:11.695 01:16:11.695 05:10:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 01:16:11.696 05:10:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ lb2nz9pxig6cm7ep1eg0hl6l56hg4k4h6jles6oh3fbl8lvi0382m9f3qrwfla20obg4yi1ce9mshjwjypaeu99cleth6jtwt9gx0d86tk9m83bo3xi6pkm6kdp5l9xk3kzgmuv0wrrvd4pe1bjj92gsggbbfcyfpa2i8omtecsf2vw14msrb2ooahlrmvk6qhiirozbhnecgu6llbk3r0wetf7m3iluqfhlf1waj9u7y9ev5lp8in21bg6jgiycdv8dd2x4b9xj5hfkjc8qw8069dflpq4j09m8pny281hdn87l2mkigewqtw0alt7chi1kyci5iu9b1q5ko4j9iht4tu9czrq0oiyk2p3n80uk5w3kbybllni7i1h5wzqbzlxevf65f3axmzgd8mq6ef06al1rs1ffcgopenb7qwic58qego7v6cmdis3rob4dgdz7ds8mtsn0sq8iitiurm4qivqz0ka912zgq8h1ixle6glfp90jgt47uanpw2alk07q5et01xc6pl1778hdr2pggrbntaf7913n6h9d6pp5v5mhkqyq5fz39qm8hmukklgrqbuehcgwpstjf51uio317s48vv6w64lbl3butwea5z57aucv1lamvyzg5z3vvui8oeauu45km1ed4rsmf27uw7q05gckqjzvlubxyqss9dy1de4274jvnc662mk37znkp7sk99bcyok1hu8qvk3dgzyfx14uzn5sk8gxwargcd1j6nocp3lzp0slxh6pu13829plmyxexcmuele3u41ron9xyk5fbqxvx3iu1fqcj5fkur662x9k9fnjpaaf1mqbp7z246xobcwyxzbwzbgtt8x2un8vgkfo4rklarlnqi1em04x15b5z4i6rtee3ln2c386fip4jl60wny0dph0i0zmw6koh5ljmlituto3bx00cf07nwajypy82r8bwdu4pjxf0sa7ojqv3sjgpqwma01nkou62v8s1mgpu0etsi3ijpa36fcek3pyeplsu8qg100yw1899x0ctujpf22i4ermqskpxfdc6gaf67z987m7b77lk9s3rg9wm1yohxaar53hbi5vqtpg8iolrrzrdlng5j0ujty2j0xhvq3g764avzrppybwhrcuq7mjbjqjfmi2pztl173uimgq91qu2i9rt4pcebnhmwijz6op43dh83n6cdhnp6glwr93zkxnbv9s4dngjc675ylq6r06iy7ghhlbayz0jxby7lr85k3v0it2jhdr9ue7crhcfv0j68o87lq7rydwwz5f9yi2hsagau7eqsth4e4kb8dspl2vo9i4t8dbxyzmjyod7nsysc1tj948p2pvxeb39kwcwnbrmrq248vgh8a05y6n6li3y4ederpg8wpjwngbwty50c7jsozhat6eqdfrvxc2h5bvxtclmqt0hhy1d9nsiou9mu3e5bvncgbcca99izc6hewk2mmcmkcgpmu9pj6hvfwps6bmvvsp62ttysxlh5auqm33t6sc12pop39k1mktyrdveqz3hyfj5y24500jz8g2h4ism30uahfzgdjgnmvupnzkkzehfpzybidk8r000cg25fb7gbzx9qb6w04e5uxb1bqprrugltvti0ruyh1lbkq4bwlvfgcttarcdige7kiysrc2ykwfge9p1g6qnra3abxc6v35jw98uh74qt9rxkkg3wpp884tetcj753z7nxl69rb78hsgsx3wb2lei66bfkxzrwzif85t57c0yv2g3p86woxtxjactc5t5za4a5o5tc6th76oqkzi3infe4bawo2gfu5t9mk4vlztsauwa09taojwwebttsfkwvxv6ecjwxnt24yedthjwd7uzwswhd5mkphx9yif8njxkuqc4junyrk0neqfxg57aa6akdepq189zp4nu58aepxbc2o9yz6tu7e8w1f3n2ilcvzrhpahjpsh3zywptuugmq9vawm7bh6gm143gipntk5xqv8v5n0jbnln0pa9i99fgayg1wnw0hc8pq6jq34566h3tmqr8soxe841vivqez50e1pq7mxz3xxmtvf9z9eenfq3mlrz0o9l0apzinkw7dmibzja63ahlg1o6i5u4529uodd7eq5kvdz9l04hb1wmleg3tasbyzgoj45n7wi0fbbh6xt5m1ao1y3lwwc11la7130hv6xzsw6rbo624x0v33ocuuldnhkt7pi4ao9kr0iss5rrib80p2uukkswz1e7g6shbmuc3np8l6tqs2sype8ma03w6ys12pu8mma05edjcxwxuacr94tnaii0s88dcx2vcrlogakxutkos0us4e0q4ofog920ofabd73vxub98jboqhgwrjm44zldyqi1y89f6hgltvcrm1kx87tyx7a65kectomg4lwchbgwhk1bhtdna05rz3f1zeevdy4fy20308f1krpej6p4vfxacde9rowzlvd8941dpsxnr993r0n7wwqi7ncsvgtgpmqe238l66tnri70bn4zcuxdjfaqblwwrqrh6djpmqgu2irhqb94c13patw2dc63lmx1rl38kriy0m8nrn5b9pj94
8i96jl72r456gypgbsd85757n3km3tp2yu2fuwgotovf3etr9e0yq8z05mrx5tcx1a2ips6xv65t2wfma9sl72p8zme5ym49wt2npix3akpyf1k0jmbq7yy9knt5m8ridu8zu7hrb7x3chywyjw4yf51kjszkh64slntxrr371dohmq7tovd2vrekt2kr7epbz3g23gl642i3njqkc9as0winq6jp5528zjj5on17tywn9guyzt9q5038oil6txrxaj92d0y5wbn98r5grhxeb0zgdb5awv5ovucwhna2m9w8qd9vt5u2bsdm5pk0f1z1599ykrb0ztqrxz2inl0uybj1b6xtvbqcuzmm4qyjgs0ylbgy4tcs208a5txq3hkxuopdnvgfq9jdosjm53620pptazkzxcuafbgck0ugp6g3h9em4h9ieff1cn7utrltoinfw0xd7difzx5lhyna40l67g2isqk72d97wqt1roougy241gwqkbg00m29wevbsqlceg2hp0yz5odcpw25pa9eiilv098g8hxndbykjelekkf81gfebo01xjj92ejdfat86e0mu6ywdd1g4rhzakjzc544oj2g1e7pvt4j4c0ewedbrn655x7nppu3e42up1gqnasc6qpb50sp5cx46u26xqyhknay6hv79ns8j0psggbawvgmnri1viplppcqeav4q22h0vwh5q37l42rrh076r4vxve51g3gonigoy0hfzk646ixu63z2neln1wcw2qolu9qpwy0zpnbqgj5ullcy3o4b3m7boyscsnd15n82lez0e9dt5bh57fzh3rrh3v2dinr555jt0759w0f0cncbwyqzs4yaq5bdfws2ucv5vu0yscdri3x7tm61s4kthtw6kuq5luoalx0wdzrvwhdf2ejdh3322a9ehm1d1u9jo0ja09xtny9lqn1ah4cd7872i1i8wzg9v9meuc8nod02bkhi2ayulazebpvua8jdgi8uv99wcd0r2zjdugg8e6m5daerf8pyq4ta90ir3ejalpv1sy5h5xw5w1ex8jk15nbxmurjqqlay77qqwq186rac055ro9tan3f6v9zxuaehz0zd5kzh1iejsn5q6t83bdbnjg32684txidphk5wtw3kgilhlty08wrh0dbfva3bsrbb6zr5nuctrfeh1jzrcvs69z3lfnsrlgaeqotpr9zchap9olrggm3tmg8aiy485kdnmwoxazdg7gllo9bbn3posxckztkja9olbnpgiyhxt9ct4xva681ajbck4flldlj2jk4yxny7rkutnnul8iqlkm1ji9nnk4czdwse9wqygvi55546md8h88lzp9m40hmwjrxhakrth7x0efo03cj4v19li4mm1bnfalf3oe89yjqus5mbtpesqtcemn7xsgialuqhy1o8zu74yximiovrovlo2zujeaq9budx62ihvy9sanz4h == \l\b\2\n\z\9\p\x\i\g\6\c\m\7\e\p\1\e\g\0\h\l\6\l\5\6\h\g\4\k\4\h\6\j\l\e\s\6\o\h\3\f\b\l\8\l\v\i\0\3\8\2\m\9\f\3\q\r\w\f\l\a\2\0\o\b\g\4\y\i\1\c\e\9\m\s\h\j\w\j\y\p\a\e\u\9\9\c\l\e\t\h\6\j\t\w\t\9\g\x\0\d\8\6\t\k\9\m\8\3\b\o\3\x\i\6\p\k\m\6\k\d\p\5\l\9\x\k\3\k\z\g\m\u\v\0\w\r\r\v\d\4\p\e\1\b\j\j\9\2\g\s\g\g\b\b\f\c\y\f\p\a\2\i\8\o\m\t\e\c\s\f\2\v\w\1\4\m\s\r\b\2\o\o\a\h\l\r\m\v\k\6\q\h\i\i\r\o\z\b\h\n\e\c\g\u\6\l\l\b\k\3\r\0\w\e\t\f\7\m\3\i\l\u\q\f\h\l\f\1\w\a\j\9\u\7\y\9\e\v\5\l\p\8\i\n\2\1\b\g\6\j\g\i\y\c\d\v\8\d\d\2\x\4\b\9\x\j\5\h\f\k\j\c\8\q\w\8\0\6\9\d\f\l\p\q\4\j\0\9\m\8\p\n\y\2\8\1\h\d\n\8\7\l\2\m\k\i\g\e\w\q\t\w\0\a\l\t\7\c\h\i\1\k\y\c\i\5\i\u\9\b\1\q\5\k\o\4\j\9\i\h\t\4\t\u\9\c\z\r\q\0\o\i\y\k\2\p\3\n\8\0\u\k\5\w\3\k\b\y\b\l\l\n\i\7\i\1\h\5\w\z\q\b\z\l\x\e\v\f\6\5\f\3\a\x\m\z\g\d\8\m\q\6\e\f\0\6\a\l\1\r\s\1\f\f\c\g\o\p\e\n\b\7\q\w\i\c\5\8\q\e\g\o\7\v\6\c\m\d\i\s\3\r\o\b\4\d\g\d\z\7\d\s\8\m\t\s\n\0\s\q\8\i\i\t\i\u\r\m\4\q\i\v\q\z\0\k\a\9\1\2\z\g\q\8\h\1\i\x\l\e\6\g\l\f\p\9\0\j\g\t\4\7\u\a\n\p\w\2\a\l\k\0\7\q\5\e\t\0\1\x\c\6\p\l\1\7\7\8\h\d\r\2\p\g\g\r\b\n\t\a\f\7\9\1\3\n\6\h\9\d\6\p\p\5\v\5\m\h\k\q\y\q\5\f\z\3\9\q\m\8\h\m\u\k\k\l\g\r\q\b\u\e\h\c\g\w\p\s\t\j\f\5\1\u\i\o\3\1\7\s\4\8\v\v\6\w\6\4\l\b\l\3\b\u\t\w\e\a\5\z\5\7\a\u\c\v\1\l\a\m\v\y\z\g\5\z\3\v\v\u\i\8\o\e\a\u\u\4\5\k\m\1\e\d\4\r\s\m\f\2\7\u\w\7\q\0\5\g\c\k\q\j\z\v\l\u\b\x\y\q\s\s\9\d\y\1\d\e\4\2\7\4\j\v\n\c\6\6\2\m\k\3\7\z\n\k\p\7\s\k\9\9\b\c\y\o\k\1\h\u\8\q\v\k\3\d\g\z\y\f\x\1\4\u\z\n\5\s\k\8\g\x\w\a\r\g\c\d\1\j\6\n\o\c\p\3\l\z\p\0\s\l\x\h\6\p\u\1\3\8\2\9\p\l\m\y\x\e\x\c\m\u\e\l\e\3\u\4\1\r\o\n\9\x\y\k\5\f\b\q\x\v\x\3\i\u\1\f\q\c\j\5\f\k\u\r\6\6\2\x\9\k\9\f\n\j\p\a\a\f\1\m\q\b\p\7\z\2\4\6\x\o\b\c\w\y\x\z\b\w\z\b\g\t\t\8\x\2\u\n\8\v\g\k\f\o\4\r\k\l\a\r\l\n\q\i\1\e\m\0\4\x\1\5\b\5\z\4\i\6\r\t\e\e\3\l\n\2\c\3\8\6\f\i\p\4\j\l\6\0\w\n\y\0\d\p\h\0\i\0\z\m\w\6\k\o\h\5\l\j\m\l\i\t\u\t\o\3\b\x\0\0\c\f\0\7\n\w\a\j\y\p\y\8\2\r\8\b\w\d\u\4\p\j\x\f\0\s\a\7\o\j\q\v\3\s\j\g\p\q\w\m\a\0\1\n\k\o\u\6\2\v\8\s\1\m\g\p\u\0\e\t\s\i\3\i\j\p\a\3\6\f\c\e\k\3\p\y\e\p\l\s\u\8\q\g\1\0\0\
y\w\1\8\9\9\x\0\c\t\u\j\p\f\2\2\i\4\e\r\m\q\s\k\p\x\f\d\c\6\g\a\f\6\7\z\9\8\7\m\7\b\7\7\l\k\9\s\3\r\g\9\w\m\1\y\o\h\x\a\a\r\5\3\h\b\i\5\v\q\t\p\g\8\i\o\l\r\r\z\r\d\l\n\g\5\j\0\u\j\t\y\2\j\0\x\h\v\q\3\g\7\6\4\a\v\z\r\p\p\y\b\w\h\r\c\u\q\7\m\j\b\j\q\j\f\m\i\2\p\z\t\l\1\7\3\u\i\m\g\q\9\1\q\u\2\i\9\r\t\4\p\c\e\b\n\h\m\w\i\j\z\6\o\p\4\3\d\h\8\3\n\6\c\d\h\n\p\6\g\l\w\r\9\3\z\k\x\n\b\v\9\s\4\d\n\g\j\c\6\7\5\y\l\q\6\r\0\6\i\y\7\g\h\h\l\b\a\y\z\0\j\x\b\y\7\l\r\8\5\k\3\v\0\i\t\2\j\h\d\r\9\u\e\7\c\r\h\c\f\v\0\j\6\8\o\8\7\l\q\7\r\y\d\w\w\z\5\f\9\y\i\2\h\s\a\g\a\u\7\e\q\s\t\h\4\e\4\k\b\8\d\s\p\l\2\v\o\9\i\4\t\8\d\b\x\y\z\m\j\y\o\d\7\n\s\y\s\c\1\t\j\9\4\8\p\2\p\v\x\e\b\3\9\k\w\c\w\n\b\r\m\r\q\2\4\8\v\g\h\8\a\0\5\y\6\n\6\l\i\3\y\4\e\d\e\r\p\g\8\w\p\j\w\n\g\b\w\t\y\5\0\c\7\j\s\o\z\h\a\t\6\e\q\d\f\r\v\x\c\2\h\5\b\v\x\t\c\l\m\q\t\0\h\h\y\1\d\9\n\s\i\o\u\9\m\u\3\e\5\b\v\n\c\g\b\c\c\a\9\9\i\z\c\6\h\e\w\k\2\m\m\c\m\k\c\g\p\m\u\9\p\j\6\h\v\f\w\p\s\6\b\m\v\v\s\p\6\2\t\t\y\s\x\l\h\5\a\u\q\m\3\3\t\6\s\c\1\2\p\o\p\3\9\k\1\m\k\t\y\r\d\v\e\q\z\3\h\y\f\j\5\y\2\4\5\0\0\j\z\8\g\2\h\4\i\s\m\3\0\u\a\h\f\z\g\d\j\g\n\m\v\u\p\n\z\k\k\z\e\h\f\p\z\y\b\i\d\k\8\r\0\0\0\c\g\2\5\f\b\7\g\b\z\x\9\q\b\6\w\0\4\e\5\u\x\b\1\b\q\p\r\r\u\g\l\t\v\t\i\0\r\u\y\h\1\l\b\k\q\4\b\w\l\v\f\g\c\t\t\a\r\c\d\i\g\e\7\k\i\y\s\r\c\2\y\k\w\f\g\e\9\p\1\g\6\q\n\r\a\3\a\b\x\c\6\v\3\5\j\w\9\8\u\h\7\4\q\t\9\r\x\k\k\g\3\w\p\p\8\8\4\t\e\t\c\j\7\5\3\z\7\n\x\l\6\9\r\b\7\8\h\s\g\s\x\3\w\b\2\l\e\i\6\6\b\f\k\x\z\r\w\z\i\f\8\5\t\5\7\c\0\y\v\2\g\3\p\8\6\w\o\x\t\x\j\a\c\t\c\5\t\5\z\a\4\a\5\o\5\t\c\6\t\h\7\6\o\q\k\z\i\3\i\n\f\e\4\b\a\w\o\2\g\f\u\5\t\9\m\k\4\v\l\z\t\s\a\u\w\a\0\9\t\a\o\j\w\w\e\b\t\t\s\f\k\w\v\x\v\6\e\c\j\w\x\n\t\2\4\y\e\d\t\h\j\w\d\7\u\z\w\s\w\h\d\5\m\k\p\h\x\9\y\i\f\8\n\j\x\k\u\q\c\4\j\u\n\y\r\k\0\n\e\q\f\x\g\5\7\a\a\6\a\k\d\e\p\q\1\8\9\z\p\4\n\u\5\8\a\e\p\x\b\c\2\o\9\y\z\6\t\u\7\e\8\w\1\f\3\n\2\i\l\c\v\z\r\h\p\a\h\j\p\s\h\3\z\y\w\p\t\u\u\g\m\q\9\v\a\w\m\7\b\h\6\g\m\1\4\3\g\i\p\n\t\k\5\x\q\v\8\v\5\n\0\j\b\n\l\n\0\p\a\9\i\9\9\f\g\a\y\g\1\w\n\w\0\h\c\8\p\q\6\j\q\3\4\5\6\6\h\3\t\m\q\r\8\s\o\x\e\8\4\1\v\i\v\q\e\z\5\0\e\1\p\q\7\m\x\z\3\x\x\m\t\v\f\9\z\9\e\e\n\f\q\3\m\l\r\z\0\o\9\l\0\a\p\z\i\n\k\w\7\d\m\i\b\z\j\a\6\3\a\h\l\g\1\o\6\i\5\u\4\5\2\9\u\o\d\d\7\e\q\5\k\v\d\z\9\l\0\4\h\b\1\w\m\l\e\g\3\t\a\s\b\y\z\g\o\j\4\5\n\7\w\i\0\f\b\b\h\6\x\t\5\m\1\a\o\1\y\3\l\w\w\c\1\1\l\a\7\1\3\0\h\v\6\x\z\s\w\6\r\b\o\6\2\4\x\0\v\3\3\o\c\u\u\l\d\n\h\k\t\7\p\i\4\a\o\9\k\r\0\i\s\s\5\r\r\i\b\8\0\p\2\u\u\k\k\s\w\z\1\e\7\g\6\s\h\b\m\u\c\3\n\p\8\l\6\t\q\s\2\s\y\p\e\8\m\a\0\3\w\6\y\s\1\2\p\u\8\m\m\a\0\5\e\d\j\c\x\w\x\u\a\c\r\9\4\t\n\a\i\i\0\s\8\8\d\c\x\2\v\c\r\l\o\g\a\k\x\u\t\k\o\s\0\u\s\4\e\0\q\4\o\f\o\g\9\2\0\o\f\a\b\d\7\3\v\x\u\b\9\8\j\b\o\q\h\g\w\r\j\m\4\4\z\l\d\y\q\i\1\y\8\9\f\6\h\g\l\t\v\c\r\m\1\k\x\8\7\t\y\x\7\a\6\5\k\e\c\t\o\m\g\4\l\w\c\h\b\g\w\h\k\1\b\h\t\d\n\a\0\5\r\z\3\f\1\z\e\e\v\d\y\4\f\y\2\0\3\0\8\f\1\k\r\p\e\j\6\p\4\v\f\x\a\c\d\e\9\r\o\w\z\l\v\d\8\9\4\1\d\p\s\x\n\r\9\9\3\r\0\n\7\w\w\q\i\7\n\c\s\v\g\t\g\p\m\q\e\2\3\8\l\6\6\t\n\r\i\7\0\b\n\4\z\c\u\x\d\j\f\a\q\b\l\w\w\r\q\r\h\6\d\j\p\m\q\g\u\2\i\r\h\q\b\9\4\c\1\3\p\a\t\w\2\d\c\6\3\l\m\x\1\r\l\3\8\k\r\i\y\0\m\8\n\r\n\5\b\9\p\j\9\4\8\i\9\6\j\l\7\2\r\4\5\6\g\y\p\g\b\s\d\8\5\7\5\7\n\3\k\m\3\t\p\2\y\u\2\f\u\w\g\o\t\o\v\f\3\e\t\r\9\e\0\y\q\8\z\0\5\m\r\x\5\t\c\x\1\a\2\i\p\s\6\x\v\6\5\t\2\w\f\m\a\9\s\l\7\2\p\8\z\m\e\5\y\m\4\9\w\t\2\n\p\i\x\3\a\k\p\y\f\1\k\0\j\m\b\q\7\y\y\9\k\n\t\5\m\8\r\i\d\u\8\z\u\7\h\r\b\7\x\3\c\h\y\w\y\j\w\4\y\f\5\1\k\j\s\z\k\h\6\4\s\l\n\t\x\r\r\3\7\1\d\o\h\m\q\7\t\o\v\d\2\v\r\e\k\t\2\k\r\7\e\p\b\z\3\g\2\3\g\l\6
\4\2\i\3\n\j\q\k\c\9\a\s\0\w\i\n\q\6\j\p\5\5\2\8\z\j\j\5\o\n\1\7\t\y\w\n\9\g\u\y\z\t\9\q\5\0\3\8\o\i\l\6\t\x\r\x\a\j\9\2\d\0\y\5\w\b\n\9\8\r\5\g\r\h\x\e\b\0\z\g\d\b\5\a\w\v\5\o\v\u\c\w\h\n\a\2\m\9\w\8\q\d\9\v\t\5\u\2\b\s\d\m\5\p\k\0\f\1\z\1\5\9\9\y\k\r\b\0\z\t\q\r\x\z\2\i\n\l\0\u\y\b\j\1\b\6\x\t\v\b\q\c\u\z\m\m\4\q\y\j\g\s\0\y\l\b\g\y\4\t\c\s\2\0\8\a\5\t\x\q\3\h\k\x\u\o\p\d\n\v\g\f\q\9\j\d\o\s\j\m\5\3\6\2\0\p\p\t\a\z\k\z\x\c\u\a\f\b\g\c\k\0\u\g\p\6\g\3\h\9\e\m\4\h\9\i\e\f\f\1\c\n\7\u\t\r\l\t\o\i\n\f\w\0\x\d\7\d\i\f\z\x\5\l\h\y\n\a\4\0\l\6\7\g\2\i\s\q\k\7\2\d\9\7\w\q\t\1\r\o\o\u\g\y\2\4\1\g\w\q\k\b\g\0\0\m\2\9\w\e\v\b\s\q\l\c\e\g\2\h\p\0\y\z\5\o\d\c\p\w\2\5\p\a\9\e\i\i\l\v\0\9\8\g\8\h\x\n\d\b\y\k\j\e\l\e\k\k\f\8\1\g\f\e\b\o\0\1\x\j\j\9\2\e\j\d\f\a\t\8\6\e\0\m\u\6\y\w\d\d\1\g\4\r\h\z\a\k\j\z\c\5\4\4\o\j\2\g\1\e\7\p\v\t\4\j\4\c\0\e\w\e\d\b\r\n\6\5\5\x\7\n\p\p\u\3\e\4\2\u\p\1\g\q\n\a\s\c\6\q\p\b\5\0\s\p\5\c\x\4\6\u\2\6\x\q\y\h\k\n\a\y\6\h\v\7\9\n\s\8\j\0\p\s\g\g\b\a\w\v\g\m\n\r\i\1\v\i\p\l\p\p\c\q\e\a\v\4\q\2\2\h\0\v\w\h\5\q\3\7\l\4\2\r\r\h\0\7\6\r\4\v\x\v\e\5\1\g\3\g\o\n\i\g\o\y\0\h\f\z\k\6\4\6\i\x\u\6\3\z\2\n\e\l\n\1\w\c\w\2\q\o\l\u\9\q\p\w\y\0\z\p\n\b\q\g\j\5\u\l\l\c\y\3\o\4\b\3\m\7\b\o\y\s\c\s\n\d\1\5\n\8\2\l\e\z\0\e\9\d\t\5\b\h\5\7\f\z\h\3\r\r\h\3\v\2\d\i\n\r\5\5\5\j\t\0\7\5\9\w\0\f\0\c\n\c\b\w\y\q\z\s\4\y\a\q\5\b\d\f\w\s\2\u\c\v\5\v\u\0\y\s\c\d\r\i\3\x\7\t\m\6\1\s\4\k\t\h\t\w\6\k\u\q\5\l\u\o\a\l\x\0\w\d\z\r\v\w\h\d\f\2\e\j\d\h\3\3\2\2\a\9\e\h\m\1\d\1\u\9\j\o\0\j\a\0\9\x\t\n\y\9\l\q\n\1\a\h\4\c\d\7\8\7\2\i\1\i\8\w\z\g\9\v\9\m\e\u\c\8\n\o\d\0\2\b\k\h\i\2\a\y\u\l\a\z\e\b\p\v\u\a\8\j\d\g\i\8\u\v\9\9\w\c\d\0\r\2\z\j\d\u\g\g\8\e\6\m\5\d\a\e\r\f\8\p\y\q\4\t\a\9\0\i\r\3\e\j\a\l\p\v\1\s\y\5\h\5\x\w\5\w\1\e\x\8\j\k\1\5\n\b\x\m\u\r\j\q\q\l\a\y\7\7\q\q\w\q\1\8\6\r\a\c\0\5\5\r\o\9\t\a\n\3\f\6\v\9\z\x\u\a\e\h\z\0\z\d\5\k\z\h\1\i\e\j\s\n\5\q\6\t\8\3\b\d\b\n\j\g\3\2\6\8\4\t\x\i\d\p\h\k\5\w\t\w\3\k\g\i\l\h\l\t\y\0\8\w\r\h\0\d\b\f\v\a\3\b\s\r\b\b\6\z\r\5\n\u\c\t\r\f\e\h\1\j\z\r\c\v\s\6\9\z\3\l\f\n\s\r\l\g\a\e\q\o\t\p\r\9\z\c\h\a\p\9\o\l\r\g\g\m\3\t\m\g\8\a\i\y\4\8\5\k\d\n\m\w\o\x\a\z\d\g\7\g\l\l\o\9\b\b\n\3\p\o\s\x\c\k\z\t\k\j\a\9\o\l\b\n\p\g\i\y\h\x\t\9\c\t\4\x\v\a\6\8\1\a\j\b\c\k\4\f\l\l\d\l\j\2\j\k\4\y\x\n\y\7\r\k\u\t\n\n\u\l\8\i\q\l\k\m\1\j\i\9\n\n\k\4\c\z\d\w\s\e\9\w\q\y\g\v\i\5\5\5\4\6\m\d\8\h\8\8\l\z\p\9\m\4\0\h\m\w\j\r\x\h\a\k\r\t\h\7\x\0\e\f\o\0\3\c\j\4\v\1\9\l\i\4\m\m\1\b\n\f\a\l\f\3\o\e\8\9\y\j\q\u\s\5\m\b\t\p\e\s\q\t\c\e\m\n\7\x\s\g\i\a\l\u\q\h\y\1\o\8\z\u\7\4\y\x\i\m\i\o\v\r\o\v\l\o\2\z\u\j\e\a\q\9\b\u\d\x\6\2\i\h\v\y\9\s\a\n\z\4\h ]] 01:16:11.696 01:16:11.696 real 0m1.260s 01:16:11.696 user 0m0.875s 01:16:11.696 sys 0m0.515s 01:16:11.696 ************************************ 01:16:11.696 END TEST dd_rw_offset 01:16:11.696 ************************************ 01:16:11.696 05:10:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:11.696 05:10:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 01:16:11.955 05:10:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 01:16:11.956 05:10:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 01:16:11.956 05:10:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:16:11.956 05:10:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 01:16:11.956 05:10:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 01:16:11.956 05:10:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
01:16:11.956 05:10:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 01:16:11.956 05:10:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 01:16:11.956 05:10:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 01:16:11.956 05:10:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 01:16:11.956 05:10:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:16:11.956 [2024-12-09 05:10:54.207984] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:11.956 [2024-12-09 05:10:54.208053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60178 ] 01:16:11.956 { 01:16:11.956 "subsystems": [ 01:16:11.956 { 01:16:11.956 "subsystem": "bdev", 01:16:11.956 "config": [ 01:16:11.956 { 01:16:11.956 "params": { 01:16:11.956 "trtype": "pcie", 01:16:11.956 "traddr": "0000:00:10.0", 01:16:11.956 "name": "Nvme0" 01:16:11.956 }, 01:16:11.956 "method": "bdev_nvme_attach_controller" 01:16:11.956 }, 01:16:11.956 { 01:16:11.956 "method": "bdev_wait_for_examine" 01:16:11.956 } 01:16:11.956 ] 01:16:11.956 } 01:16:11.956 ] 01:16:11.956 } 01:16:11.956 [2024-12-09 05:10:54.358628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:11.956 [2024-12-09 05:10:54.408375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:12.215 [2024-12-09 05:10:54.448617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:12.215  [2024-12-09T05:10:54.940Z] Copying: 1024/1024 [kB] (average 500 MBps) 01:16:12.484 01:16:12.484 05:10:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:12.484 01:16:12.484 real 0m16.290s 01:16:12.484 user 0m11.696s 01:16:12.484 sys 0m5.749s 01:16:12.484 05:10:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:12.484 05:10:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 01:16:12.484 ************************************ 01:16:12.484 END TEST spdk_dd_basic_rw 01:16:12.484 ************************************ 01:16:12.484 05:10:54 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 01:16:12.484 05:10:54 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:12.484 05:10:54 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:12.484 05:10:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:16:12.484 ************************************ 01:16:12.484 START TEST spdk_dd_posix 01:16:12.484 ************************************ 01:16:12.484 05:10:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 01:16:12.484 * Looking for test storage... 
01:16:12.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:16:12.744 05:10:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:12.744 05:10:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:12.744 05:10:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 01:16:12.744 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:12.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:12.745 --rc genhtml_branch_coverage=1 01:16:12.745 --rc genhtml_function_coverage=1 01:16:12.745 --rc genhtml_legend=1 01:16:12.745 --rc geninfo_all_blocks=1 01:16:12.745 --rc geninfo_unexecuted_blocks=1 01:16:12.745 01:16:12.745 ' 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:12.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:12.745 --rc genhtml_branch_coverage=1 01:16:12.745 --rc genhtml_function_coverage=1 01:16:12.745 --rc genhtml_legend=1 01:16:12.745 --rc geninfo_all_blocks=1 01:16:12.745 --rc geninfo_unexecuted_blocks=1 01:16:12.745 01:16:12.745 ' 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:12.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:12.745 --rc genhtml_branch_coverage=1 01:16:12.745 --rc genhtml_function_coverage=1 01:16:12.745 --rc genhtml_legend=1 01:16:12.745 --rc geninfo_all_blocks=1 01:16:12.745 --rc geninfo_unexecuted_blocks=1 01:16:12.745 01:16:12.745 ' 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:12.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:12.745 --rc genhtml_branch_coverage=1 01:16:12.745 --rc genhtml_function_coverage=1 01:16:12.745 --rc genhtml_legend=1 01:16:12.745 --rc geninfo_all_blocks=1 01:16:12.745 --rc geninfo_unexecuted_blocks=1 01:16:12.745 01:16:12.745 ' 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 01:16:12.745 * First test run, liburing in use 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:16:12.745 ************************************ 01:16:12.745 START TEST dd_flag_append 01:16:12.745 ************************************ 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=o8tztu55wu98p01govszc3cchee10qbz 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=qul62yyojd4gxtktqp0gy819h2dxzhaq 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s o8tztu55wu98p01govszc3cchee10qbz 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s qul62yyojd4gxtktqp0gy819h2dxzhaq 01:16:12.745 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 01:16:12.745 [2024-12-09 05:10:55.128421] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:12.745 [2024-12-09 05:10:55.128485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60250 ] 01:16:13.005 [2024-12-09 05:10:55.281282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:13.005 [2024-12-09 05:10:55.333372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:13.005 [2024-12-09 05:10:55.374171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:13.005  [2024-12-09T05:10:55.720Z] Copying: 32/32 [B] (average 31 kBps) 01:16:13.264 01:16:13.264 ************************************ 01:16:13.264 END TEST dd_flag_append 01:16:13.264 ************************************ 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ qul62yyojd4gxtktqp0gy819h2dxzhaqo8tztu55wu98p01govszc3cchee10qbz == \q\u\l\6\2\y\y\o\j\d\4\g\x\t\k\t\q\p\0\g\y\8\1\9\h\2\d\x\z\h\a\q\o\8\t\z\t\u\5\5\w\u\9\8\p\0\1\g\o\v\s\z\c\3\c\c\h\e\e\1\0\q\b\z ]] 01:16:13.264 01:16:13.264 real 0m0.536s 01:16:13.264 user 0m0.295s 01:16:13.264 sys 0m0.241s 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:16:13.264 ************************************ 01:16:13.264 START TEST dd_flag_directory 01:16:13.264 ************************************ 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:16:13.264 05:10:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:16:13.524 [2024-12-09 05:10:55.721644] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:13.524 [2024-12-09 05:10:55.721765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60275 ] 01:16:13.524 [2024-12-09 05:10:55.873182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:13.524 [2024-12-09 05:10:55.928529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:13.524 [2024-12-09 05:10:55.969758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:13.783 [2024-12-09 05:10:56.001097] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:16:13.783 [2024-12-09 05:10:56.001153] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:16:13.783 [2024-12-09 05:10:56.001166] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:16:13.783 [2024-12-09 05:10:56.095997] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:16:13.783 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 01:16:13.783 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:16:13.783 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 01:16:13.783 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 01:16:13.783 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 01:16:13.783 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:16:13.783 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:16:13.783 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 01:16:13.783 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:16:13.783 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:13.783 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:13.783 05:10:56 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:13.784 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:13.784 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:13.784 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:13.784 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:13.784 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:16:13.784 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:16:14.042 [2024-12-09 05:10:56.255949] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:14.042 [2024-12-09 05:10:56.256111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60290 ] 01:16:14.042 [2024-12-09 05:10:56.405894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:14.042 [2024-12-09 05:10:56.457228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:14.300 [2024-12-09 05:10:56.497417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:14.300 [2024-12-09 05:10:56.527803] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:16:14.300 [2024-12-09 05:10:56.527930] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:16:14.300 [2024-12-09 05:10:56.527975] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:16:14.300 [2024-12-09 05:10:56.622013] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:16:14.300 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 01:16:14.300 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:16:14.300 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 01:16:14.300 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 01:16:14.300 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 01:16:14.300 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:16:14.300 01:16:14.300 real 0m1.058s 01:16:14.300 user 0m0.602s 01:16:14.300 sys 0m0.243s 01:16:14.300 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:14.300 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 01:16:14.300 ************************************ 01:16:14.300 END TEST dd_flag_directory 01:16:14.300 ************************************ 01:16:14.559 05:10:56 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:16:14.559 ************************************ 01:16:14.559 START TEST dd_flag_nofollow 01:16:14.559 ************************************ 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:16:14.559 05:10:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:14.559 [2024-12-09 05:10:56.858822] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:14.559 [2024-12-09 05:10:56.858896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60317 ] 01:16:14.560 [2024-12-09 05:10:57.011713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:14.816 [2024-12-09 05:10:57.057498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:14.816 [2024-12-09 05:10:57.097632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:14.816 [2024-12-09 05:10:57.127807] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 01:16:14.816 [2024-12-09 05:10:57.127856] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 01:16:14.816 [2024-12-09 05:10:57.127868] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:16:14.816 [2024-12-09 05:10:57.222577] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:15.074 05:10:57 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:16:15.074 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:16:15.074 [2024-12-09 05:10:57.381306] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:15.074 [2024-12-09 05:10:57.381401] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60328 ] 01:16:15.331 [2024-12-09 05:10:57.535799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:15.331 [2024-12-09 05:10:57.586153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:15.331 [2024-12-09 05:10:57.625970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:15.331 [2024-12-09 05:10:57.655793] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 01:16:15.331 [2024-12-09 05:10:57.655839] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 01:16:15.331 [2024-12-09 05:10:57.655851] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:16:15.331 [2024-12-09 05:10:57.749070] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:16:15.588 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 01:16:15.588 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:16:15.588 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 01:16:15.588 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 01:16:15.588 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 01:16:15.588 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:16:15.588 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 01:16:15.588 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 01:16:15.588 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 01:16:15.588 05:10:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:15.588 [2024-12-09 05:10:57.903507] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:15.588 [2024-12-09 05:10:57.903666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60331 ] 01:16:15.846 [2024-12-09 05:10:58.055020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:15.846 [2024-12-09 05:10:58.102406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:15.846 [2024-12-09 05:10:58.142601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:15.846  [2024-12-09T05:10:58.560Z] Copying: 512/512 [B] (average 500 kBps) 01:16:16.104 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ u3qng9d3w74ftnc1uzzuzdua7unh8d8r1ix7e4bercl2rev0cutdxhg5tev3j5z2ilzh180yihz4j6fm9rgzlvrgdnayl10p8zpxdk3k0hm1qu3k5741tshnsxbzrn8cftldtvh7lon6oz2y4p94c64ymvkp1asbqmc9d5o570o5jihn2c1lykwdmpblcm05dpl2r8e0oh9zlfaqks70ekasmvtbzj8v6r4q4bg9te4jue3a6aun0nlkgvqza6s7vccxzytjws6xydylvsqsqbl9jqbz2xfpxjf8yrojcy62cwbdxsekjsj3ybpg72k7pm06v9d28aid84b8umhpepxsflbz4yfpo4swsw7i74yvi42tgl4icbeyqc6qjxu51xxoh5efzoh3xvx9kvy7kg4xy0fd1o8mqdsx5ib5lxnnm81zo55b4e32zami4aub0xq4gy57i82ku8zlaon6an6z529ty834rlr1m9fclhjkj4fh2g2ulg12cmw8igik == \u\3\q\n\g\9\d\3\w\7\4\f\t\n\c\1\u\z\z\u\z\d\u\a\7\u\n\h\8\d\8\r\1\i\x\7\e\4\b\e\r\c\l\2\r\e\v\0\c\u\t\d\x\h\g\5\t\e\v\3\j\5\z\2\i\l\z\h\1\8\0\y\i\h\z\4\j\6\f\m\9\r\g\z\l\v\r\g\d\n\a\y\l\1\0\p\8\z\p\x\d\k\3\k\0\h\m\1\q\u\3\k\5\7\4\1\t\s\h\n\s\x\b\z\r\n\8\c\f\t\l\d\t\v\h\7\l\o\n\6\o\z\2\y\4\p\9\4\c\6\4\y\m\v\k\p\1\a\s\b\q\m\c\9\d\5\o\5\7\0\o\5\j\i\h\n\2\c\1\l\y\k\w\d\m\p\b\l\c\m\0\5\d\p\l\2\r\8\e\0\o\h\9\z\l\f\a\q\k\s\7\0\e\k\a\s\m\v\t\b\z\j\8\v\6\r\4\q\4\b\g\9\t\e\4\j\u\e\3\a\6\a\u\n\0\n\l\k\g\v\q\z\a\6\s\7\v\c\c\x\z\y\t\j\w\s\6\x\y\d\y\l\v\s\q\s\q\b\l\9\j\q\b\z\2\x\f\p\x\j\f\8\y\r\o\j\c\y\6\2\c\w\b\d\x\s\e\k\j\s\j\3\y\b\p\g\7\2\k\7\p\m\0\6\v\9\d\2\8\a\i\d\8\4\b\8\u\m\h\p\e\p\x\s\f\l\b\z\4\y\f\p\o\4\s\w\s\w\7\i\7\4\y\v\i\4\2\t\g\l\4\i\c\b\e\y\q\c\6\q\j\x\u\5\1\x\x\o\h\5\e\f\z\o\h\3\x\v\x\9\k\v\y\7\k\g\4\x\y\0\f\d\1\o\8\m\q\d\s\x\5\i\b\5\l\x\n\n\m\8\1\z\o\5\5\b\4\e\3\2\z\a\m\i\4\a\u\b\0\x\q\4\g\y\5\7\i\8\2\k\u\8\z\l\a\o\n\6\a\n\6\z\5\2\9\t\y\8\3\4\r\l\r\1\m\9\f\c\l\h\j\k\j\4\f\h\2\g\2\u\l\g\1\2\c\m\w\8\i\g\i\k ]] 01:16:16.104 01:16:16.104 real 0m1.577s 01:16:16.104 user 0m0.919s 01:16:16.104 sys 0m0.449s 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 01:16:16.104 ************************************ 01:16:16.104 END TEST dd_flag_nofollow 01:16:16.104 ************************************ 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:16:16.104 ************************************ 01:16:16.104 START TEST dd_flag_noatime 01:16:16.104 ************************************ 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733721058 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733721058 01:16:16.104 05:10:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 01:16:17.092 05:10:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:17.092 [2024-12-09 05:10:59.521670] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:17.092 [2024-12-09 05:10:59.521736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60378 ] 01:16:17.350 [2024-12-09 05:10:59.673790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:17.350 [2024-12-09 05:10:59.728376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:17.350 [2024-12-09 05:10:59.768399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:17.350  [2024-12-09T05:11:00.063Z] Copying: 512/512 [B] (average 500 kBps) 01:16:17.607 01:16:17.607 05:10:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:16:17.607 05:10:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733721058 )) 01:16:17.607 05:10:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:17.607 05:11:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733721058 )) 01:16:17.607 05:11:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:17.607 [2024-12-09 05:11:00.055454] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:17.607 [2024-12-09 05:11:00.055521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60390 ] 01:16:17.864 [2024-12-09 05:11:00.208829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:17.864 [2024-12-09 05:11:00.260131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:17.864 [2024-12-09 05:11:00.299754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:18.120  [2024-12-09T05:11:00.576Z] Copying: 512/512 [B] (average 500 kBps) 01:16:18.120 01:16:18.120 05:11:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:16:18.120 05:11:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733721060 )) 01:16:18.120 01:16:18.120 real 0m2.097s 01:16:18.120 user 0m0.615s 01:16:18.120 sys 0m0.481s 01:16:18.120 05:11:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:18.120 05:11:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 01:16:18.120 ************************************ 01:16:18.120 END TEST dd_flag_noatime 01:16:18.121 ************************************ 01:16:18.377 05:11:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 01:16:18.377 05:11:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:18.377 05:11:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:18.377 05:11:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:16:18.377 ************************************ 01:16:18.377 START TEST dd_flags_misc 01:16:18.377 ************************************ 01:16:18.377 05:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 01:16:18.377 05:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 01:16:18.377 05:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 01:16:18.377 05:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 01:16:18.377 05:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 01:16:18.377 05:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 01:16:18.377 05:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 01:16:18.377 05:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 01:16:18.377 05:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:16:18.377 05:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 01:16:18.377 [2024-12-09 05:11:00.659110] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:18.377 [2024-12-09 05:11:00.659188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60420 ] 01:16:18.377 [2024-12-09 05:11:00.808685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:18.633 [2024-12-09 05:11:00.856185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:18.633 [2024-12-09 05:11:00.895783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:18.633  [2024-12-09T05:11:01.347Z] Copying: 512/512 [B] (average 500 kBps) 01:16:18.891 01:16:18.891 05:11:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ id30vqrw4dyw3zinlb59hnebaa5li7lojpcuo22adi7z6f84j78ht3ba0ulh0r7quwalqfqoq4bplkelhmyqxx9p1zb5fa9gfth5rlcybhtffxfgtheu2a1pvhqfs5zxsu3hlc0n8v5uwlydcbgv8im0yfg8pmatwr1nd271rpld5bl8d6djg8c91lg6s5ikbfev9p84v4caf9a1mrp7m99exor49tc1r0e648rx82sjme1g6hxvneovf5znp5efnxd05l358q9l5kmlg429n9s6k8z1e3myepf0qvo2c0imktojzns2v1jb5r8a5nrolwh79h3hx954fxsxoxvr1fw09ksrp499j1zpyf1pf9nksjdxcbsmthwkd5xsc6hp52pqkfu5ncrwr412jy045amhd8c0963teqxyags0uh4bwo2au1jk6520zjop2lxukxu4549j0jf2rg60esd59mfg6m7bucac603hd0fymkq0i99t40w7iung1nk32dlv == \i\d\3\0\v\q\r\w\4\d\y\w\3\z\i\n\l\b\5\9\h\n\e\b\a\a\5\l\i\7\l\o\j\p\c\u\o\2\2\a\d\i\7\z\6\f\8\4\j\7\8\h\t\3\b\a\0\u\l\h\0\r\7\q\u\w\a\l\q\f\q\o\q\4\b\p\l\k\e\l\h\m\y\q\x\x\9\p\1\z\b\5\f\a\9\g\f\t\h\5\r\l\c\y\b\h\t\f\f\x\f\g\t\h\e\u\2\a\1\p\v\h\q\f\s\5\z\x\s\u\3\h\l\c\0\n\8\v\5\u\w\l\y\d\c\b\g\v\8\i\m\0\y\f\g\8\p\m\a\t\w\r\1\n\d\2\7\1\r\p\l\d\5\b\l\8\d\6\d\j\g\8\c\9\1\l\g\6\s\5\i\k\b\f\e\v\9\p\8\4\v\4\c\a\f\9\a\1\m\r\p\7\m\9\9\e\x\o\r\4\9\t\c\1\r\0\e\6\4\8\r\x\8\2\s\j\m\e\1\g\6\h\x\v\n\e\o\v\f\5\z\n\p\5\e\f\n\x\d\0\5\l\3\5\8\q\9\l\5\k\m\l\g\4\2\9\n\9\s\6\k\8\z\1\e\3\m\y\e\p\f\0\q\v\o\2\c\0\i\m\k\t\o\j\z\n\s\2\v\1\j\b\5\r\8\a\5\n\r\o\l\w\h\7\9\h\3\h\x\9\5\4\f\x\s\x\o\x\v\r\1\f\w\0\9\k\s\r\p\4\9\9\j\1\z\p\y\f\1\p\f\9\n\k\s\j\d\x\c\b\s\m\t\h\w\k\d\5\x\s\c\6\h\p\5\2\p\q\k\f\u\5\n\c\r\w\r\4\1\2\j\y\0\4\5\a\m\h\d\8\c\0\9\6\3\t\e\q\x\y\a\g\s\0\u\h\4\b\w\o\2\a\u\1\j\k\6\5\2\0\z\j\o\p\2\l\x\u\k\x\u\4\5\4\9\j\0\j\f\2\r\g\6\0\e\s\d\5\9\m\f\g\6\m\7\b\u\c\a\c\6\0\3\h\d\0\f\y\m\k\q\0\i\9\9\t\4\0\w\7\i\u\n\g\1\n\k\3\2\d\l\v ]] 01:16:18.891 05:11:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:16:18.891 05:11:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 01:16:18.891 [2024-12-09 05:11:01.166082] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:18.891 [2024-12-09 05:11:01.166174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60435 ] 01:16:18.891 [2024-12-09 05:11:01.315137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:19.149 [2024-12-09 05:11:01.368914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:19.149 [2024-12-09 05:11:01.410241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:19.149  [2024-12-09T05:11:01.863Z] Copying: 512/512 [B] (average 500 kBps) 01:16:19.407 01:16:19.407 05:11:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ id30vqrw4dyw3zinlb59hnebaa5li7lojpcuo22adi7z6f84j78ht3ba0ulh0r7quwalqfqoq4bplkelhmyqxx9p1zb5fa9gfth5rlcybhtffxfgtheu2a1pvhqfs5zxsu3hlc0n8v5uwlydcbgv8im0yfg8pmatwr1nd271rpld5bl8d6djg8c91lg6s5ikbfev9p84v4caf9a1mrp7m99exor49tc1r0e648rx82sjme1g6hxvneovf5znp5efnxd05l358q9l5kmlg429n9s6k8z1e3myepf0qvo2c0imktojzns2v1jb5r8a5nrolwh79h3hx954fxsxoxvr1fw09ksrp499j1zpyf1pf9nksjdxcbsmthwkd5xsc6hp52pqkfu5ncrwr412jy045amhd8c0963teqxyags0uh4bwo2au1jk6520zjop2lxukxu4549j0jf2rg60esd59mfg6m7bucac603hd0fymkq0i99t40w7iung1nk32dlv == \i\d\3\0\v\q\r\w\4\d\y\w\3\z\i\n\l\b\5\9\h\n\e\b\a\a\5\l\i\7\l\o\j\p\c\u\o\2\2\a\d\i\7\z\6\f\8\4\j\7\8\h\t\3\b\a\0\u\l\h\0\r\7\q\u\w\a\l\q\f\q\o\q\4\b\p\l\k\e\l\h\m\y\q\x\x\9\p\1\z\b\5\f\a\9\g\f\t\h\5\r\l\c\y\b\h\t\f\f\x\f\g\t\h\e\u\2\a\1\p\v\h\q\f\s\5\z\x\s\u\3\h\l\c\0\n\8\v\5\u\w\l\y\d\c\b\g\v\8\i\m\0\y\f\g\8\p\m\a\t\w\r\1\n\d\2\7\1\r\p\l\d\5\b\l\8\d\6\d\j\g\8\c\9\1\l\g\6\s\5\i\k\b\f\e\v\9\p\8\4\v\4\c\a\f\9\a\1\m\r\p\7\m\9\9\e\x\o\r\4\9\t\c\1\r\0\e\6\4\8\r\x\8\2\s\j\m\e\1\g\6\h\x\v\n\e\o\v\f\5\z\n\p\5\e\f\n\x\d\0\5\l\3\5\8\q\9\l\5\k\m\l\g\4\2\9\n\9\s\6\k\8\z\1\e\3\m\y\e\p\f\0\q\v\o\2\c\0\i\m\k\t\o\j\z\n\s\2\v\1\j\b\5\r\8\a\5\n\r\o\l\w\h\7\9\h\3\h\x\9\5\4\f\x\s\x\o\x\v\r\1\f\w\0\9\k\s\r\p\4\9\9\j\1\z\p\y\f\1\p\f\9\n\k\s\j\d\x\c\b\s\m\t\h\w\k\d\5\x\s\c\6\h\p\5\2\p\q\k\f\u\5\n\c\r\w\r\4\1\2\j\y\0\4\5\a\m\h\d\8\c\0\9\6\3\t\e\q\x\y\a\g\s\0\u\h\4\b\w\o\2\a\u\1\j\k\6\5\2\0\z\j\o\p\2\l\x\u\k\x\u\4\5\4\9\j\0\j\f\2\r\g\6\0\e\s\d\5\9\m\f\g\6\m\7\b\u\c\a\c\6\0\3\h\d\0\f\y\m\k\q\0\i\9\9\t\4\0\w\7\i\u\n\g\1\n\k\3\2\d\l\v ]] 01:16:19.407 05:11:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:16:19.407 05:11:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 01:16:19.407 [2024-12-09 05:11:01.683672] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:19.407 [2024-12-09 05:11:01.683753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60440 ] 01:16:19.407 [2024-12-09 05:11:01.832487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:19.665 [2024-12-09 05:11:01.886553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:19.665 [2024-12-09 05:11:01.927222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:19.665  [2024-12-09T05:11:02.379Z] Copying: 512/512 [B] (average 100 kBps) 01:16:19.923 01:16:19.923 05:11:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ id30vqrw4dyw3zinlb59hnebaa5li7lojpcuo22adi7z6f84j78ht3ba0ulh0r7quwalqfqoq4bplkelhmyqxx9p1zb5fa9gfth5rlcybhtffxfgtheu2a1pvhqfs5zxsu3hlc0n8v5uwlydcbgv8im0yfg8pmatwr1nd271rpld5bl8d6djg8c91lg6s5ikbfev9p84v4caf9a1mrp7m99exor49tc1r0e648rx82sjme1g6hxvneovf5znp5efnxd05l358q9l5kmlg429n9s6k8z1e3myepf0qvo2c0imktojzns2v1jb5r8a5nrolwh79h3hx954fxsxoxvr1fw09ksrp499j1zpyf1pf9nksjdxcbsmthwkd5xsc6hp52pqkfu5ncrwr412jy045amhd8c0963teqxyags0uh4bwo2au1jk6520zjop2lxukxu4549j0jf2rg60esd59mfg6m7bucac603hd0fymkq0i99t40w7iung1nk32dlv == \i\d\3\0\v\q\r\w\4\d\y\w\3\z\i\n\l\b\5\9\h\n\e\b\a\a\5\l\i\7\l\o\j\p\c\u\o\2\2\a\d\i\7\z\6\f\8\4\j\7\8\h\t\3\b\a\0\u\l\h\0\r\7\q\u\w\a\l\q\f\q\o\q\4\b\p\l\k\e\l\h\m\y\q\x\x\9\p\1\z\b\5\f\a\9\g\f\t\h\5\r\l\c\y\b\h\t\f\f\x\f\g\t\h\e\u\2\a\1\p\v\h\q\f\s\5\z\x\s\u\3\h\l\c\0\n\8\v\5\u\w\l\y\d\c\b\g\v\8\i\m\0\y\f\g\8\p\m\a\t\w\r\1\n\d\2\7\1\r\p\l\d\5\b\l\8\d\6\d\j\g\8\c\9\1\l\g\6\s\5\i\k\b\f\e\v\9\p\8\4\v\4\c\a\f\9\a\1\m\r\p\7\m\9\9\e\x\o\r\4\9\t\c\1\r\0\e\6\4\8\r\x\8\2\s\j\m\e\1\g\6\h\x\v\n\e\o\v\f\5\z\n\p\5\e\f\n\x\d\0\5\l\3\5\8\q\9\l\5\k\m\l\g\4\2\9\n\9\s\6\k\8\z\1\e\3\m\y\e\p\f\0\q\v\o\2\c\0\i\m\k\t\o\j\z\n\s\2\v\1\j\b\5\r\8\a\5\n\r\o\l\w\h\7\9\h\3\h\x\9\5\4\f\x\s\x\o\x\v\r\1\f\w\0\9\k\s\r\p\4\9\9\j\1\z\p\y\f\1\p\f\9\n\k\s\j\d\x\c\b\s\m\t\h\w\k\d\5\x\s\c\6\h\p\5\2\p\q\k\f\u\5\n\c\r\w\r\4\1\2\j\y\0\4\5\a\m\h\d\8\c\0\9\6\3\t\e\q\x\y\a\g\s\0\u\h\4\b\w\o\2\a\u\1\j\k\6\5\2\0\z\j\o\p\2\l\x\u\k\x\u\4\5\4\9\j\0\j\f\2\r\g\6\0\e\s\d\5\9\m\f\g\6\m\7\b\u\c\a\c\6\0\3\h\d\0\f\y\m\k\q\0\i\9\9\t\4\0\w\7\i\u\n\g\1\n\k\3\2\d\l\v ]] 01:16:19.923 05:11:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:16:19.923 05:11:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 01:16:19.923 [2024-12-09 05:11:02.197070] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:19.923 [2024-12-09 05:11:02.197142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60455 ] 01:16:19.923 [2024-12-09 05:11:02.334626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:20.181 [2024-12-09 05:11:02.389645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:20.181 [2024-12-09 05:11:02.430467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:20.181  [2024-12-09T05:11:02.895Z] Copying: 512/512 [B] (average 166 kBps) 01:16:20.439 01:16:20.439 05:11:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ id30vqrw4dyw3zinlb59hnebaa5li7lojpcuo22adi7z6f84j78ht3ba0ulh0r7quwalqfqoq4bplkelhmyqxx9p1zb5fa9gfth5rlcybhtffxfgtheu2a1pvhqfs5zxsu3hlc0n8v5uwlydcbgv8im0yfg8pmatwr1nd271rpld5bl8d6djg8c91lg6s5ikbfev9p84v4caf9a1mrp7m99exor49tc1r0e648rx82sjme1g6hxvneovf5znp5efnxd05l358q9l5kmlg429n9s6k8z1e3myepf0qvo2c0imktojzns2v1jb5r8a5nrolwh79h3hx954fxsxoxvr1fw09ksrp499j1zpyf1pf9nksjdxcbsmthwkd5xsc6hp52pqkfu5ncrwr412jy045amhd8c0963teqxyags0uh4bwo2au1jk6520zjop2lxukxu4549j0jf2rg60esd59mfg6m7bucac603hd0fymkq0i99t40w7iung1nk32dlv == \i\d\3\0\v\q\r\w\4\d\y\w\3\z\i\n\l\b\5\9\h\n\e\b\a\a\5\l\i\7\l\o\j\p\c\u\o\2\2\a\d\i\7\z\6\f\8\4\j\7\8\h\t\3\b\a\0\u\l\h\0\r\7\q\u\w\a\l\q\f\q\o\q\4\b\p\l\k\e\l\h\m\y\q\x\x\9\p\1\z\b\5\f\a\9\g\f\t\h\5\r\l\c\y\b\h\t\f\f\x\f\g\t\h\e\u\2\a\1\p\v\h\q\f\s\5\z\x\s\u\3\h\l\c\0\n\8\v\5\u\w\l\y\d\c\b\g\v\8\i\m\0\y\f\g\8\p\m\a\t\w\r\1\n\d\2\7\1\r\p\l\d\5\b\l\8\d\6\d\j\g\8\c\9\1\l\g\6\s\5\i\k\b\f\e\v\9\p\8\4\v\4\c\a\f\9\a\1\m\r\p\7\m\9\9\e\x\o\r\4\9\t\c\1\r\0\e\6\4\8\r\x\8\2\s\j\m\e\1\g\6\h\x\v\n\e\o\v\f\5\z\n\p\5\e\f\n\x\d\0\5\l\3\5\8\q\9\l\5\k\m\l\g\4\2\9\n\9\s\6\k\8\z\1\e\3\m\y\e\p\f\0\q\v\o\2\c\0\i\m\k\t\o\j\z\n\s\2\v\1\j\b\5\r\8\a\5\n\r\o\l\w\h\7\9\h\3\h\x\9\5\4\f\x\s\x\o\x\v\r\1\f\w\0\9\k\s\r\p\4\9\9\j\1\z\p\y\f\1\p\f\9\n\k\s\j\d\x\c\b\s\m\t\h\w\k\d\5\x\s\c\6\h\p\5\2\p\q\k\f\u\5\n\c\r\w\r\4\1\2\j\y\0\4\5\a\m\h\d\8\c\0\9\6\3\t\e\q\x\y\a\g\s\0\u\h\4\b\w\o\2\a\u\1\j\k\6\5\2\0\z\j\o\p\2\l\x\u\k\x\u\4\5\4\9\j\0\j\f\2\r\g\6\0\e\s\d\5\9\m\f\g\6\m\7\b\u\c\a\c\6\0\3\h\d\0\f\y\m\k\q\0\i\9\9\t\4\0\w\7\i\u\n\g\1\n\k\3\2\d\l\v ]] 01:16:20.439 05:11:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 01:16:20.439 05:11:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 01:16:20.439 05:11:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 01:16:20.439 05:11:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 01:16:20.439 05:11:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:16:20.439 05:11:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 01:16:20.439 [2024-12-09 05:11:02.721256] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:20.439 [2024-12-09 05:11:02.721367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60459 ] 01:16:20.439 [2024-12-09 05:11:02.874106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:20.697 [2024-12-09 05:11:02.929352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:20.697 [2024-12-09 05:11:02.970063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:20.697  [2024-12-09T05:11:03.412Z] Copying: 512/512 [B] (average 500 kBps) 01:16:20.956 01:16:20.956 05:11:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ o5yds3gbn3udjhvv6gh7cfq9v9c9e5oz6kysw1xxcu5cttmbti3j0b2vs3p5tefxca8qqpdkmytfzuoffyid4emnw9avnhg21cp0j94opx4q8qobomn914n8n976ujr05s8ss81zudc3gs6j5n32jc9ev1vrsut9nnmamod0tcngfjjqd8unj770fbezpa4oy7mtf5c85974t5iu0eikbqd574w7d84arw0ol1o1bxmjrp7ar2ldcziqk5h8lx23cf6zqc9uvzy4idh19hwue11d8dbxbunacu5jlzvbt9o8n2p5pgmow706yidnj5em3mwrlw0wiudwn40q2b66c0jru32kzs21d1gupch5krhmein8nefydocnrm9jmm5seu0hj0e0ro2j2b8irfgxw97zcywcqikrfo32pf27dd2vo1pjcjsei4it9yuksoi7sn2o5l4jzgjw8qitnyawvp0xbq51813t46r1ustprm8g0fznd4amyl728idi0ro3 == \o\5\y\d\s\3\g\b\n\3\u\d\j\h\v\v\6\g\h\7\c\f\q\9\v\9\c\9\e\5\o\z\6\k\y\s\w\1\x\x\c\u\5\c\t\t\m\b\t\i\3\j\0\b\2\v\s\3\p\5\t\e\f\x\c\a\8\q\q\p\d\k\m\y\t\f\z\u\o\f\f\y\i\d\4\e\m\n\w\9\a\v\n\h\g\2\1\c\p\0\j\9\4\o\p\x\4\q\8\q\o\b\o\m\n\9\1\4\n\8\n\9\7\6\u\j\r\0\5\s\8\s\s\8\1\z\u\d\c\3\g\s\6\j\5\n\3\2\j\c\9\e\v\1\v\r\s\u\t\9\n\n\m\a\m\o\d\0\t\c\n\g\f\j\j\q\d\8\u\n\j\7\7\0\f\b\e\z\p\a\4\o\y\7\m\t\f\5\c\8\5\9\7\4\t\5\i\u\0\e\i\k\b\q\d\5\7\4\w\7\d\8\4\a\r\w\0\o\l\1\o\1\b\x\m\j\r\p\7\a\r\2\l\d\c\z\i\q\k\5\h\8\l\x\2\3\c\f\6\z\q\c\9\u\v\z\y\4\i\d\h\1\9\h\w\u\e\1\1\d\8\d\b\x\b\u\n\a\c\u\5\j\l\z\v\b\t\9\o\8\n\2\p\5\p\g\m\o\w\7\0\6\y\i\d\n\j\5\e\m\3\m\w\r\l\w\0\w\i\u\d\w\n\4\0\q\2\b\6\6\c\0\j\r\u\3\2\k\z\s\2\1\d\1\g\u\p\c\h\5\k\r\h\m\e\i\n\8\n\e\f\y\d\o\c\n\r\m\9\j\m\m\5\s\e\u\0\h\j\0\e\0\r\o\2\j\2\b\8\i\r\f\g\x\w\9\7\z\c\y\w\c\q\i\k\r\f\o\3\2\p\f\2\7\d\d\2\v\o\1\p\j\c\j\s\e\i\4\i\t\9\y\u\k\s\o\i\7\s\n\2\o\5\l\4\j\z\g\j\w\8\q\i\t\n\y\a\w\v\p\0\x\b\q\5\1\8\1\3\t\4\6\r\1\u\s\t\p\r\m\8\g\0\f\z\n\d\4\a\m\y\l\7\2\8\i\d\i\0\r\o\3 ]] 01:16:20.956 05:11:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:16:20.956 05:11:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 01:16:20.956 [2024-12-09 05:11:03.233852] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:20.956 [2024-12-09 05:11:03.233933] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60473 ] 01:16:20.956 [2024-12-09 05:11:03.382703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:21.214 [2024-12-09 05:11:03.436246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:21.214 [2024-12-09 05:11:03.476393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:21.214  [2024-12-09T05:11:03.929Z] Copying: 512/512 [B] (average 500 kBps) 01:16:21.473 01:16:21.473 05:11:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ o5yds3gbn3udjhvv6gh7cfq9v9c9e5oz6kysw1xxcu5cttmbti3j0b2vs3p5tefxca8qqpdkmytfzuoffyid4emnw9avnhg21cp0j94opx4q8qobomn914n8n976ujr05s8ss81zudc3gs6j5n32jc9ev1vrsut9nnmamod0tcngfjjqd8unj770fbezpa4oy7mtf5c85974t5iu0eikbqd574w7d84arw0ol1o1bxmjrp7ar2ldcziqk5h8lx23cf6zqc9uvzy4idh19hwue11d8dbxbunacu5jlzvbt9o8n2p5pgmow706yidnj5em3mwrlw0wiudwn40q2b66c0jru32kzs21d1gupch5krhmein8nefydocnrm9jmm5seu0hj0e0ro2j2b8irfgxw97zcywcqikrfo32pf27dd2vo1pjcjsei4it9yuksoi7sn2o5l4jzgjw8qitnyawvp0xbq51813t46r1ustprm8g0fznd4amyl728idi0ro3 == \o\5\y\d\s\3\g\b\n\3\u\d\j\h\v\v\6\g\h\7\c\f\q\9\v\9\c\9\e\5\o\z\6\k\y\s\w\1\x\x\c\u\5\c\t\t\m\b\t\i\3\j\0\b\2\v\s\3\p\5\t\e\f\x\c\a\8\q\q\p\d\k\m\y\t\f\z\u\o\f\f\y\i\d\4\e\m\n\w\9\a\v\n\h\g\2\1\c\p\0\j\9\4\o\p\x\4\q\8\q\o\b\o\m\n\9\1\4\n\8\n\9\7\6\u\j\r\0\5\s\8\s\s\8\1\z\u\d\c\3\g\s\6\j\5\n\3\2\j\c\9\e\v\1\v\r\s\u\t\9\n\n\m\a\m\o\d\0\t\c\n\g\f\j\j\q\d\8\u\n\j\7\7\0\f\b\e\z\p\a\4\o\y\7\m\t\f\5\c\8\5\9\7\4\t\5\i\u\0\e\i\k\b\q\d\5\7\4\w\7\d\8\4\a\r\w\0\o\l\1\o\1\b\x\m\j\r\p\7\a\r\2\l\d\c\z\i\q\k\5\h\8\l\x\2\3\c\f\6\z\q\c\9\u\v\z\y\4\i\d\h\1\9\h\w\u\e\1\1\d\8\d\b\x\b\u\n\a\c\u\5\j\l\z\v\b\t\9\o\8\n\2\p\5\p\g\m\o\w\7\0\6\y\i\d\n\j\5\e\m\3\m\w\r\l\w\0\w\i\u\d\w\n\4\0\q\2\b\6\6\c\0\j\r\u\3\2\k\z\s\2\1\d\1\g\u\p\c\h\5\k\r\h\m\e\i\n\8\n\e\f\y\d\o\c\n\r\m\9\j\m\m\5\s\e\u\0\h\j\0\e\0\r\o\2\j\2\b\8\i\r\f\g\x\w\9\7\z\c\y\w\c\q\i\k\r\f\o\3\2\p\f\2\7\d\d\2\v\o\1\p\j\c\j\s\e\i\4\i\t\9\y\u\k\s\o\i\7\s\n\2\o\5\l\4\j\z\g\j\w\8\q\i\t\n\y\a\w\v\p\0\x\b\q\5\1\8\1\3\t\4\6\r\1\u\s\t\p\r\m\8\g\0\f\z\n\d\4\a\m\y\l\7\2\8\i\d\i\0\r\o\3 ]] 01:16:21.473 05:11:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:16:21.473 05:11:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 01:16:21.473 [2024-12-09 05:11:03.752190] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:21.473 [2024-12-09 05:11:03.752264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60478 ] 01:16:21.473 [2024-12-09 05:11:03.905206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:21.731 [2024-12-09 05:11:03.958895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:21.731 [2024-12-09 05:11:03.999922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:21.731  [2024-12-09T05:11:04.446Z] Copying: 512/512 [B] (average 166 kBps) 01:16:21.990 01:16:21.990 05:11:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ o5yds3gbn3udjhvv6gh7cfq9v9c9e5oz6kysw1xxcu5cttmbti3j0b2vs3p5tefxca8qqpdkmytfzuoffyid4emnw9avnhg21cp0j94opx4q8qobomn914n8n976ujr05s8ss81zudc3gs6j5n32jc9ev1vrsut9nnmamod0tcngfjjqd8unj770fbezpa4oy7mtf5c85974t5iu0eikbqd574w7d84arw0ol1o1bxmjrp7ar2ldcziqk5h8lx23cf6zqc9uvzy4idh19hwue11d8dbxbunacu5jlzvbt9o8n2p5pgmow706yidnj5em3mwrlw0wiudwn40q2b66c0jru32kzs21d1gupch5krhmein8nefydocnrm9jmm5seu0hj0e0ro2j2b8irfgxw97zcywcqikrfo32pf27dd2vo1pjcjsei4it9yuksoi7sn2o5l4jzgjw8qitnyawvp0xbq51813t46r1ustprm8g0fznd4amyl728idi0ro3 == \o\5\y\d\s\3\g\b\n\3\u\d\j\h\v\v\6\g\h\7\c\f\q\9\v\9\c\9\e\5\o\z\6\k\y\s\w\1\x\x\c\u\5\c\t\t\m\b\t\i\3\j\0\b\2\v\s\3\p\5\t\e\f\x\c\a\8\q\q\p\d\k\m\y\t\f\z\u\o\f\f\y\i\d\4\e\m\n\w\9\a\v\n\h\g\2\1\c\p\0\j\9\4\o\p\x\4\q\8\q\o\b\o\m\n\9\1\4\n\8\n\9\7\6\u\j\r\0\5\s\8\s\s\8\1\z\u\d\c\3\g\s\6\j\5\n\3\2\j\c\9\e\v\1\v\r\s\u\t\9\n\n\m\a\m\o\d\0\t\c\n\g\f\j\j\q\d\8\u\n\j\7\7\0\f\b\e\z\p\a\4\o\y\7\m\t\f\5\c\8\5\9\7\4\t\5\i\u\0\e\i\k\b\q\d\5\7\4\w\7\d\8\4\a\r\w\0\o\l\1\o\1\b\x\m\j\r\p\7\a\r\2\l\d\c\z\i\q\k\5\h\8\l\x\2\3\c\f\6\z\q\c\9\u\v\z\y\4\i\d\h\1\9\h\w\u\e\1\1\d\8\d\b\x\b\u\n\a\c\u\5\j\l\z\v\b\t\9\o\8\n\2\p\5\p\g\m\o\w\7\0\6\y\i\d\n\j\5\e\m\3\m\w\r\l\w\0\w\i\u\d\w\n\4\0\q\2\b\6\6\c\0\j\r\u\3\2\k\z\s\2\1\d\1\g\u\p\c\h\5\k\r\h\m\e\i\n\8\n\e\f\y\d\o\c\n\r\m\9\j\m\m\5\s\e\u\0\h\j\0\e\0\r\o\2\j\2\b\8\i\r\f\g\x\w\9\7\z\c\y\w\c\q\i\k\r\f\o\3\2\p\f\2\7\d\d\2\v\o\1\p\j\c\j\s\e\i\4\i\t\9\y\u\k\s\o\i\7\s\n\2\o\5\l\4\j\z\g\j\w\8\q\i\t\n\y\a\w\v\p\0\x\b\q\5\1\8\1\3\t\4\6\r\1\u\s\t\p\r\m\8\g\0\f\z\n\d\4\a\m\y\l\7\2\8\i\d\i\0\r\o\3 ]] 01:16:21.990 05:11:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:16:21.990 05:11:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 01:16:21.990 [2024-12-09 05:11:04.276317] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:21.990 [2024-12-09 05:11:04.276416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60492 ] 01:16:21.990 [2024-12-09 05:11:04.428135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:22.248 [2024-12-09 05:11:04.482857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:22.248 [2024-12-09 05:11:04.523910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:22.248  [2024-12-09T05:11:04.962Z] Copying: 512/512 [B] (average 100 kBps) 01:16:22.506 01:16:22.506 05:11:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ o5yds3gbn3udjhvv6gh7cfq9v9c9e5oz6kysw1xxcu5cttmbti3j0b2vs3p5tefxca8qqpdkmytfzuoffyid4emnw9avnhg21cp0j94opx4q8qobomn914n8n976ujr05s8ss81zudc3gs6j5n32jc9ev1vrsut9nnmamod0tcngfjjqd8unj770fbezpa4oy7mtf5c85974t5iu0eikbqd574w7d84arw0ol1o1bxmjrp7ar2ldcziqk5h8lx23cf6zqc9uvzy4idh19hwue11d8dbxbunacu5jlzvbt9o8n2p5pgmow706yidnj5em3mwrlw0wiudwn40q2b66c0jru32kzs21d1gupch5krhmein8nefydocnrm9jmm5seu0hj0e0ro2j2b8irfgxw97zcywcqikrfo32pf27dd2vo1pjcjsei4it9yuksoi7sn2o5l4jzgjw8qitnyawvp0xbq51813t46r1ustprm8g0fznd4amyl728idi0ro3 == \o\5\y\d\s\3\g\b\n\3\u\d\j\h\v\v\6\g\h\7\c\f\q\9\v\9\c\9\e\5\o\z\6\k\y\s\w\1\x\x\c\u\5\c\t\t\m\b\t\i\3\j\0\b\2\v\s\3\p\5\t\e\f\x\c\a\8\q\q\p\d\k\m\y\t\f\z\u\o\f\f\y\i\d\4\e\m\n\w\9\a\v\n\h\g\2\1\c\p\0\j\9\4\o\p\x\4\q\8\q\o\b\o\m\n\9\1\4\n\8\n\9\7\6\u\j\r\0\5\s\8\s\s\8\1\z\u\d\c\3\g\s\6\j\5\n\3\2\j\c\9\e\v\1\v\r\s\u\t\9\n\n\m\a\m\o\d\0\t\c\n\g\f\j\j\q\d\8\u\n\j\7\7\0\f\b\e\z\p\a\4\o\y\7\m\t\f\5\c\8\5\9\7\4\t\5\i\u\0\e\i\k\b\q\d\5\7\4\w\7\d\8\4\a\r\w\0\o\l\1\o\1\b\x\m\j\r\p\7\a\r\2\l\d\c\z\i\q\k\5\h\8\l\x\2\3\c\f\6\z\q\c\9\u\v\z\y\4\i\d\h\1\9\h\w\u\e\1\1\d\8\d\b\x\b\u\n\a\c\u\5\j\l\z\v\b\t\9\o\8\n\2\p\5\p\g\m\o\w\7\0\6\y\i\d\n\j\5\e\m\3\m\w\r\l\w\0\w\i\u\d\w\n\4\0\q\2\b\6\6\c\0\j\r\u\3\2\k\z\s\2\1\d\1\g\u\p\c\h\5\k\r\h\m\e\i\n\8\n\e\f\y\d\o\c\n\r\m\9\j\m\m\5\s\e\u\0\h\j\0\e\0\r\o\2\j\2\b\8\i\r\f\g\x\w\9\7\z\c\y\w\c\q\i\k\r\f\o\3\2\p\f\2\7\d\d\2\v\o\1\p\j\c\j\s\e\i\4\i\t\9\y\u\k\s\o\i\7\s\n\2\o\5\l\4\j\z\g\j\w\8\q\i\t\n\y\a\w\v\p\0\x\b\q\5\1\8\1\3\t\4\6\r\1\u\s\t\p\r\m\8\g\0\f\z\n\d\4\a\m\y\l\7\2\8\i\d\i\0\r\o\3 ]] 01:16:22.506 01:16:22.506 real 0m4.160s 01:16:22.506 user 0m2.434s 01:16:22.506 sys 0m1.755s 01:16:22.506 05:11:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:22.506 05:11:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 01:16:22.506 ************************************ 01:16:22.506 END TEST dd_flags_misc 01:16:22.506 ************************************ 01:16:22.506 05:11:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 01:16:22.506 05:11:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 01:16:22.506 * Second test run, disabling liburing, forcing AIO 01:16:22.506 05:11:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 01:16:22.506 05:11:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 01:16:22.506 05:11:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:22.506 05:11:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:22.506 05:11:04 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 01:16:22.506 ************************************ 01:16:22.507 START TEST dd_flag_append_forced_aio 01:16:22.507 ************************************ 01:16:22.507 05:11:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 01:16:22.507 05:11:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 01:16:22.507 05:11:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 01:16:22.507 05:11:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 01:16:22.507 05:11:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:16:22.507 05:11:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:16:22.507 05:11:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=zk17n3ae88wzsvnqe7pvo2jn0phr4m01 01:16:22.507 05:11:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 01:16:22.507 05:11:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:16:22.507 05:11:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:16:22.507 05:11:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=xhe6dceppoupt1zyyyfqjg4k0rscnt5z 01:16:22.507 05:11:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s zk17n3ae88wzsvnqe7pvo2jn0phr4m01 01:16:22.507 05:11:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s xhe6dceppoupt1zyyyfqjg4k0rscnt5z 01:16:22.507 05:11:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 01:16:22.507 [2024-12-09 05:11:04.882630] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
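The append run above reduces to the following sketch. The two 32-byte strings are the dump0/dump1 values generated in the log; the output redirections and the read-back with $(< ...) are illustrative paraphrases of the comparison that follows, not lines copied from the test script.
    dump0=zk17n3ae88wzsvnqe7pvo2jn0phr4m01
    dump1=xhe6dceppoupt1zyyyfqjg4k0rscnt5z
    printf %s "$dump0" > /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    printf %s "$dump1" > /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append
    # an O_APPEND write lands after the existing bytes, so dump1 must now read dump1 followed by dump0
    [[ $(< /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1) == "${dump1}${dump0}" ]]
The [[ xhe6...zk17... == ... ]] check logged just below is this concatenation test after xtrace expansion.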
01:16:22.507 [2024-12-09 05:11:04.882716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60516 ] 01:16:22.764 [2024-12-09 05:11:05.013834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:22.764 [2024-12-09 05:11:05.076953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:22.764 [2024-12-09 05:11:05.128939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:22.764  [2024-12-09T05:11:05.476Z] Copying: 32/32 [B] (average 31 kBps) 01:16:23.020 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ xhe6dceppoupt1zyyyfqjg4k0rscnt5zzk17n3ae88wzsvnqe7pvo2jn0phr4m01 == \x\h\e\6\d\c\e\p\p\o\u\p\t\1\z\y\y\y\f\q\j\g\4\k\0\r\s\c\n\t\5\z\z\k\1\7\n\3\a\e\8\8\w\z\s\v\n\q\e\7\p\v\o\2\j\n\0\p\h\r\4\m\0\1 ]] 01:16:23.021 01:16:23.021 real 0m0.558s 01:16:23.021 user 0m0.294s 01:16:23.021 sys 0m0.142s 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:16:23.021 ************************************ 01:16:23.021 END TEST dd_flag_append_forced_aio 01:16:23.021 ************************************ 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:16:23.021 ************************************ 01:16:23.021 START TEST dd_flag_directory_forced_aio 01:16:23.021 ************************************ 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:23.021 05:11:05 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:16:23.021 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:16:23.278 [2024-12-09 05:11:05.518014] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:23.278 [2024-12-09 05:11:05.518100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60548 ] 01:16:23.278 [2024-12-09 05:11:05.671675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:23.278 [2024-12-09 05:11:05.725779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:23.536 [2024-12-09 05:11:05.766447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:23.536 [2024-12-09 05:11:05.797867] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:16:23.536 [2024-12-09 05:11:05.797914] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:16:23.536 [2024-12-09 05:11:05.797925] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:16:23.536 [2024-12-09 05:11:05.894548] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:16:23.794 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 01:16:23.794 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:16:23.794 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 01:16:23.794 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 01:16:23.794 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 01:16:23.794 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:16:23.794 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:16:23.794 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 01:16:23.794 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:16:23.794 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:23.794 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:23.794 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:23.794 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:23.794 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:23.794 05:11:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:23.794 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:23.794 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:16:23.794 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 01:16:23.794 [2024-12-09 05:11:06.040992] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:23.794 [2024-12-09 05:11:06.041063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60552 ] 01:16:23.794 [2024-12-09 05:11:06.183932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:23.794 [2024-12-09 05:11:06.238506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:24.054 [2024-12-09 05:11:06.279143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:24.054 [2024-12-09 05:11:06.309730] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:16:24.054 [2024-12-09 05:11:06.309772] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 01:16:24.054 [2024-12-09 05:11:06.309784] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:16:24.054 [2024-12-09 05:11:06.408110] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:16:24.054 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 01:16:24.054 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:16:24.054 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 01:16:24.320 05:11:06 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:16:24.320 01:16:24.320 real 0m1.059s 01:16:24.320 user 0m0.611s 01:16:24.320 sys 0m0.238s 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:16:24.320 ************************************ 01:16:24.320 END TEST dd_flag_directory_forced_aio 01:16:24.320 ************************************ 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:16:24.320 ************************************ 01:16:24.320 START TEST dd_flag_nofollow_forced_aio 01:16:24.320 ************************************ 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:16:24.320 05:11:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:24.320 [2024-12-09 05:11:06.651186] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:24.320 [2024-12-09 05:11:06.651253] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60586 ] 01:16:24.589 [2024-12-09 05:11:06.804935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:24.589 [2024-12-09 05:11:06.860635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:24.589 [2024-12-09 05:11:06.901152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:24.589 [2024-12-09 05:11:06.932391] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 01:16:24.589 [2024-12-09 05:11:06.932430] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 01:16:24.589 [2024-12-09 05:11:06.932441] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:16:24.589 [2024-12-09 05:11:07.028122] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:16:24.848 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 01:16:24.848 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:16:24.848 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 01:16:24.848 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 01:16:24.848 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 01:16:24.848 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:16:24.848 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:16:24.848 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 01:16:24.848 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:16:24.848 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:24.848 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:24.848 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:24.848 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:24.848 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:24.848 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:24.848 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:24.849 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:16:24.849 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 01:16:24.849 [2024-12-09 05:11:07.180886] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:24.849 [2024-12-09 05:11:07.180955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60590 ] 01:16:25.107 [2024-12-09 05:11:07.331943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:25.107 [2024-12-09 05:11:07.385173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:25.107 [2024-12-09 05:11:07.426960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:25.107 [2024-12-09 05:11:07.459334] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 01:16:25.107 [2024-12-09 05:11:07.459362] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 01:16:25.107 [2024-12-09 05:11:07.459376] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:16:25.107 [2024-12-09 05:11:07.555623] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:16:25.367 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 01:16:25.367 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:16:25.367 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 01:16:25.367 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 01:16:25.367 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 01:16:25.367 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:16:25.367 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 01:16:25.367 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:16:25.367 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:16:25.367 05:11:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:25.367 [2024-12-09 05:11:07.705778] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:25.367 [2024-12-09 05:11:07.705844] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60603 ] 01:16:25.626 [2024-12-09 05:11:07.858204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:25.626 [2024-12-09 05:11:07.909666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:25.626 [2024-12-09 05:11:07.950556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:25.626  [2024-12-09T05:11:08.341Z] Copying: 512/512 [B] (average 500 kBps) 01:16:25.885 01:16:25.885 05:11:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ eqz4lhz6ypdf6vq2rtkhsmc8rwv10wscgumk4yss4txx0x4vdublaheks46xdle9vwoe06j2q5t7x5bf282zilj8rcnzzr2r74x36khezu768yhtw9j1zb6no0tw2gtyipcvw8797u0qe0ak8pvku4wvsrcu5ozaw8q9eycqzl4fnwb9z0xj8f72221qoyd305bktnwl8j1ai3c27duz0egdps77uwki42a1jn91hkngkstbr7x5kc6uwqhvf0m4lrybkr3sczs2txr6ucqgxfht0831ak3liubldlb8xzn32k5un6glfhnnkxk3s4z031h32lmag0x6wzpit1zworq0pakqujgq1qt4sn0o1xy6p6o4ao1tb7tt89ts4194nztcwaipmsupkn7pcxyu655qeorsz8ct6ir2g8ed90c6vvb3x3u6luyw6q573ibe0nudpqfgirup88i9codj3s8p7sp461ifksl4tcoyjwao4tvfmp5ihk975ptvjf2j == \e\q\z\4\l\h\z\6\y\p\d\f\6\v\q\2\r\t\k\h\s\m\c\8\r\w\v\1\0\w\s\c\g\u\m\k\4\y\s\s\4\t\x\x\0\x\4\v\d\u\b\l\a\h\e\k\s\4\6\x\d\l\e\9\v\w\o\e\0\6\j\2\q\5\t\7\x\5\b\f\2\8\2\z\i\l\j\8\r\c\n\z\z\r\2\r\7\4\x\3\6\k\h\e\z\u\7\6\8\y\h\t\w\9\j\1\z\b\6\n\o\0\t\w\2\g\t\y\i\p\c\v\w\8\7\9\7\u\0\q\e\0\a\k\8\p\v\k\u\4\w\v\s\r\c\u\5\o\z\a\w\8\q\9\e\y\c\q\z\l\4\f\n\w\b\9\z\0\x\j\8\f\7\2\2\2\1\q\o\y\d\3\0\5\b\k\t\n\w\l\8\j\1\a\i\3\c\2\7\d\u\z\0\e\g\d\p\s\7\7\u\w\k\i\4\2\a\1\j\n\9\1\h\k\n\g\k\s\t\b\r\7\x\5\k\c\6\u\w\q\h\v\f\0\m\4\l\r\y\b\k\r\3\s\c\z\s\2\t\x\r\6\u\c\q\g\x\f\h\t\0\8\3\1\a\k\3\l\i\u\b\l\d\l\b\8\x\z\n\3\2\k\5\u\n\6\g\l\f\h\n\n\k\x\k\3\s\4\z\0\3\1\h\3\2\l\m\a\g\0\x\6\w\z\p\i\t\1\z\w\o\r\q\0\p\a\k\q\u\j\g\q\1\q\t\4\s\n\0\o\1\x\y\6\p\6\o\4\a\o\1\t\b\7\t\t\8\9\t\s\4\1\9\4\n\z\t\c\w\a\i\p\m\s\u\p\k\n\7\p\c\x\y\u\6\5\5\q\e\o\r\s\z\8\c\t\6\i\r\2\g\8\e\d\9\0\c\6\v\v\b\3\x\3\u\6\l\u\y\w\6\q\5\7\3\i\b\e\0\n\u\d\p\q\f\g\i\r\u\p\8\8\i\9\c\o\d\j\3\s\8\p\7\s\p\4\6\1\i\f\k\s\l\4\t\c\o\y\j\w\a\o\4\t\v\f\m\p\5\i\h\k\9\7\5\p\t\v\j\f\2\j ]] 01:16:25.885 01:16:25.885 real 0m1.613s 01:16:25.885 user 0m0.933s 01:16:25.885 sys 0m0.350s 01:16:25.885 05:11:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:25.885 ************************************ 01:16:25.885 END TEST dd_flag_nofollow_forced_aio 01:16:25.885 ************************************ 01:16:25.885 05:11:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:16:25.885 05:11:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 01:16:25.885 05:11:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:25.885 05:11:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:25.886 05:11:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 01:16:25.886 ************************************ 01:16:25.886 START TEST dd_flag_noatime_forced_aio 01:16:25.886 ************************************ 01:16:25.886 05:11:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 01:16:25.886 05:11:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 01:16:25.886 05:11:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 01:16:25.886 05:11:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 01:16:25.886 05:11:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:16:25.886 05:11:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:16:25.886 05:11:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:16:25.886 05:11:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733721067 01:16:25.886 05:11:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:25.886 05:11:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733721068 01:16:25.886 05:11:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 01:16:27.261 05:11:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:27.261 [2024-12-09 05:11:09.345348] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
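The noatime sequence above boils down to the sketch below. The stat --printf=%X calls, the sleep, and the two copies match the log; the command-substitution form of the arithmetic checks is inferred from the expanded literal timestamps (1733721067/1733721068) and is illustrative rather than authoritative.
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    atime_if=$(stat --printf=%X "$test_file0")   # 1733721067 in this run
    atime_of=$(stat --printf=%X "$test_file1")   # 1733721068 in this run
    sleep 1
    "$DD" --aio --if="$test_file0" --iflag=noatime --of="$test_file1"
    (( atime_if == $(stat --printf=%X "$test_file0") ))   # noatime read: source atime unchanged
    (( atime_of == $(stat --printf=%X "$test_file1") ))
    "$DD" --aio --if="$test_file0" --of="$test_file1"
    (( atime_if < $(stat --printf=%X "$test_file0") ))    # plain read: source atime moved forward
The (( atime_if == 1733721067 )) and (( atime_if < ... )) lines that follow in the log are these checks after xtrace has expanded the substitutions.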
01:16:27.261 [2024-12-09 05:11:09.345408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60640 ] 01:16:27.261 [2024-12-09 05:11:09.489707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:27.261 [2024-12-09 05:11:09.546097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:27.261 [2024-12-09 05:11:09.587803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:27.261  [2024-12-09T05:11:09.976Z] Copying: 512/512 [B] (average 500 kBps) 01:16:27.520 01:16:27.520 05:11:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:16:27.520 05:11:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733721067 )) 01:16:27.520 05:11:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:27.520 05:11:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733721068 )) 01:16:27.520 05:11:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:27.520 [2024-12-09 05:11:09.896686] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:27.520 [2024-12-09 05:11:09.896769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60657 ] 01:16:27.779 [2024-12-09 05:11:10.047132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:27.779 [2024-12-09 05:11:10.101097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:27.779 [2024-12-09 05:11:10.141951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:27.779  [2024-12-09T05:11:10.493Z] Copying: 512/512 [B] (average 500 kBps) 01:16:28.037 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733721070 )) 01:16:28.037 01:16:28.037 real 0m2.133s 01:16:28.037 user 0m0.617s 01:16:28.037 sys 0m0.276s 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:16:28.037 ************************************ 01:16:28.037 END TEST dd_flag_noatime_forced_aio 01:16:28.037 ************************************ 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 01:16:28.037 ************************************ 01:16:28.037 START TEST dd_flags_misc_forced_aio 01:16:28.037 ************************************ 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:16:28.037 05:11:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 01:16:28.295 [2024-12-09 05:11:10.529373] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:28.295 [2024-12-09 05:11:10.529467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60686 ] 01:16:28.295 [2024-12-09 05:11:10.662170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:28.295 [2024-12-09 05:11:10.722182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:28.553 [2024-12-09 05:11:10.766729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:28.553  [2024-12-09T05:11:11.009Z] Copying: 512/512 [B] (average 500 kBps) 01:16:28.553 01:16:28.811 05:11:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cp9vijhzz21jlxak7fzs23kd19hw4mk846a2pa20y5oifg1vmiml5tnf2kho2hiewas3j44kujxh15k6sczlcj87d1ad29zcf5g7beddeorae56ekro2ebnn2dyvg9dyvnjdpz3f2jnm7jq5cf9xbl4i0r8vgclfffpruzayv678x8hd2j65d5b7ekq28aak9a4m3c9q2otfc9o1uourfk5j2ikw4w06a9lelth2n5dvuq9iuzgg1txsielf99pusonsdwpkvpwyl185o6d2hu7m3ensbv5x4008r0no938j4yyucr457kgdxycfa228p0fcga28qkgxa7q61vb277ekbc1rn475di8c0454rq0evzsumj0j8o00cq8x92bgwnvwefv7yzqikie994wluit2hec3pfcwx0uhvfq3kj9nem7aemeu55xddm96svzvdln2aqt261uh9i9amx8oa7wwgqagicg6r3hmac2dcv3wvh8yedqwwgr67yzo9792 == 
\c\p\9\v\i\j\h\z\z\2\1\j\l\x\a\k\7\f\z\s\2\3\k\d\1\9\h\w\4\m\k\8\4\6\a\2\p\a\2\0\y\5\o\i\f\g\1\v\m\i\m\l\5\t\n\f\2\k\h\o\2\h\i\e\w\a\s\3\j\4\4\k\u\j\x\h\1\5\k\6\s\c\z\l\c\j\8\7\d\1\a\d\2\9\z\c\f\5\g\7\b\e\d\d\e\o\r\a\e\5\6\e\k\r\o\2\e\b\n\n\2\d\y\v\g\9\d\y\v\n\j\d\p\z\3\f\2\j\n\m\7\j\q\5\c\f\9\x\b\l\4\i\0\r\8\v\g\c\l\f\f\f\p\r\u\z\a\y\v\6\7\8\x\8\h\d\2\j\6\5\d\5\b\7\e\k\q\2\8\a\a\k\9\a\4\m\3\c\9\q\2\o\t\f\c\9\o\1\u\o\u\r\f\k\5\j\2\i\k\w\4\w\0\6\a\9\l\e\l\t\h\2\n\5\d\v\u\q\9\i\u\z\g\g\1\t\x\s\i\e\l\f\9\9\p\u\s\o\n\s\d\w\p\k\v\p\w\y\l\1\8\5\o\6\d\2\h\u\7\m\3\e\n\s\b\v\5\x\4\0\0\8\r\0\n\o\9\3\8\j\4\y\y\u\c\r\4\5\7\k\g\d\x\y\c\f\a\2\2\8\p\0\f\c\g\a\2\8\q\k\g\x\a\7\q\6\1\v\b\2\7\7\e\k\b\c\1\r\n\4\7\5\d\i\8\c\0\4\5\4\r\q\0\e\v\z\s\u\m\j\0\j\8\o\0\0\c\q\8\x\9\2\b\g\w\n\v\w\e\f\v\7\y\z\q\i\k\i\e\9\9\4\w\l\u\i\t\2\h\e\c\3\p\f\c\w\x\0\u\h\v\f\q\3\k\j\9\n\e\m\7\a\e\m\e\u\5\5\x\d\d\m\9\6\s\v\z\v\d\l\n\2\a\q\t\2\6\1\u\h\9\i\9\a\m\x\8\o\a\7\w\w\g\q\a\g\i\c\g\6\r\3\h\m\a\c\2\d\c\v\3\w\v\h\8\y\e\d\q\w\w\g\r\6\7\y\z\o\9\7\9\2 ]] 01:16:28.811 05:11:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:16:28.812 05:11:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 01:16:28.812 [2024-12-09 05:11:11.058372] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:28.812 [2024-12-09 05:11:11.058460] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60693 ] 01:16:28.812 [2024-12-09 05:11:11.210666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:28.812 [2024-12-09 05:11:11.254002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:29.070 [2024-12-09 05:11:11.293857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:29.070  [2024-12-09T05:11:11.526Z] Copying: 512/512 [B] (average 500 kBps) 01:16:29.070 01:16:29.328 05:11:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cp9vijhzz21jlxak7fzs23kd19hw4mk846a2pa20y5oifg1vmiml5tnf2kho2hiewas3j44kujxh15k6sczlcj87d1ad29zcf5g7beddeorae56ekro2ebnn2dyvg9dyvnjdpz3f2jnm7jq5cf9xbl4i0r8vgclfffpruzayv678x8hd2j65d5b7ekq28aak9a4m3c9q2otfc9o1uourfk5j2ikw4w06a9lelth2n5dvuq9iuzgg1txsielf99pusonsdwpkvpwyl185o6d2hu7m3ensbv5x4008r0no938j4yyucr457kgdxycfa228p0fcga28qkgxa7q61vb277ekbc1rn475di8c0454rq0evzsumj0j8o00cq8x92bgwnvwefv7yzqikie994wluit2hec3pfcwx0uhvfq3kj9nem7aemeu55xddm96svzvdln2aqt261uh9i9amx8oa7wwgqagicg6r3hmac2dcv3wvh8yedqwwgr67yzo9792 == 
\c\p\9\v\i\j\h\z\z\2\1\j\l\x\a\k\7\f\z\s\2\3\k\d\1\9\h\w\4\m\k\8\4\6\a\2\p\a\2\0\y\5\o\i\f\g\1\v\m\i\m\l\5\t\n\f\2\k\h\o\2\h\i\e\w\a\s\3\j\4\4\k\u\j\x\h\1\5\k\6\s\c\z\l\c\j\8\7\d\1\a\d\2\9\z\c\f\5\g\7\b\e\d\d\e\o\r\a\e\5\6\e\k\r\o\2\e\b\n\n\2\d\y\v\g\9\d\y\v\n\j\d\p\z\3\f\2\j\n\m\7\j\q\5\c\f\9\x\b\l\4\i\0\r\8\v\g\c\l\f\f\f\p\r\u\z\a\y\v\6\7\8\x\8\h\d\2\j\6\5\d\5\b\7\e\k\q\2\8\a\a\k\9\a\4\m\3\c\9\q\2\o\t\f\c\9\o\1\u\o\u\r\f\k\5\j\2\i\k\w\4\w\0\6\a\9\l\e\l\t\h\2\n\5\d\v\u\q\9\i\u\z\g\g\1\t\x\s\i\e\l\f\9\9\p\u\s\o\n\s\d\w\p\k\v\p\w\y\l\1\8\5\o\6\d\2\h\u\7\m\3\e\n\s\b\v\5\x\4\0\0\8\r\0\n\o\9\3\8\j\4\y\y\u\c\r\4\5\7\k\g\d\x\y\c\f\a\2\2\8\p\0\f\c\g\a\2\8\q\k\g\x\a\7\q\6\1\v\b\2\7\7\e\k\b\c\1\r\n\4\7\5\d\i\8\c\0\4\5\4\r\q\0\e\v\z\s\u\m\j\0\j\8\o\0\0\c\q\8\x\9\2\b\g\w\n\v\w\e\f\v\7\y\z\q\i\k\i\e\9\9\4\w\l\u\i\t\2\h\e\c\3\p\f\c\w\x\0\u\h\v\f\q\3\k\j\9\n\e\m\7\a\e\m\e\u\5\5\x\d\d\m\9\6\s\v\z\v\d\l\n\2\a\q\t\2\6\1\u\h\9\i\9\a\m\x\8\o\a\7\w\w\g\q\a\g\i\c\g\6\r\3\h\m\a\c\2\d\c\v\3\w\v\h\8\y\e\d\q\w\w\g\r\6\7\y\z\o\9\7\9\2 ]] 01:16:29.328 05:11:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:16:29.328 05:11:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 01:16:29.328 [2024-12-09 05:11:11.581575] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:29.328 [2024-12-09 05:11:11.581642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60701 ] 01:16:29.328 [2024-12-09 05:11:11.734459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:29.586 [2024-12-09 05:11:11.789389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:29.586 [2024-12-09 05:11:11.829077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:29.586  [2024-12-09T05:11:12.306Z] Copying: 512/512 [B] (average 500 kBps) 01:16:29.850 01:16:29.850 05:11:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cp9vijhzz21jlxak7fzs23kd19hw4mk846a2pa20y5oifg1vmiml5tnf2kho2hiewas3j44kujxh15k6sczlcj87d1ad29zcf5g7beddeorae56ekro2ebnn2dyvg9dyvnjdpz3f2jnm7jq5cf9xbl4i0r8vgclfffpruzayv678x8hd2j65d5b7ekq28aak9a4m3c9q2otfc9o1uourfk5j2ikw4w06a9lelth2n5dvuq9iuzgg1txsielf99pusonsdwpkvpwyl185o6d2hu7m3ensbv5x4008r0no938j4yyucr457kgdxycfa228p0fcga28qkgxa7q61vb277ekbc1rn475di8c0454rq0evzsumj0j8o00cq8x92bgwnvwefv7yzqikie994wluit2hec3pfcwx0uhvfq3kj9nem7aemeu55xddm96svzvdln2aqt261uh9i9amx8oa7wwgqagicg6r3hmac2dcv3wvh8yedqwwgr67yzo9792 == 
\c\p\9\v\i\j\h\z\z\2\1\j\l\x\a\k\7\f\z\s\2\3\k\d\1\9\h\w\4\m\k\8\4\6\a\2\p\a\2\0\y\5\o\i\f\g\1\v\m\i\m\l\5\t\n\f\2\k\h\o\2\h\i\e\w\a\s\3\j\4\4\k\u\j\x\h\1\5\k\6\s\c\z\l\c\j\8\7\d\1\a\d\2\9\z\c\f\5\g\7\b\e\d\d\e\o\r\a\e\5\6\e\k\r\o\2\e\b\n\n\2\d\y\v\g\9\d\y\v\n\j\d\p\z\3\f\2\j\n\m\7\j\q\5\c\f\9\x\b\l\4\i\0\r\8\v\g\c\l\f\f\f\p\r\u\z\a\y\v\6\7\8\x\8\h\d\2\j\6\5\d\5\b\7\e\k\q\2\8\a\a\k\9\a\4\m\3\c\9\q\2\o\t\f\c\9\o\1\u\o\u\r\f\k\5\j\2\i\k\w\4\w\0\6\a\9\l\e\l\t\h\2\n\5\d\v\u\q\9\i\u\z\g\g\1\t\x\s\i\e\l\f\9\9\p\u\s\o\n\s\d\w\p\k\v\p\w\y\l\1\8\5\o\6\d\2\h\u\7\m\3\e\n\s\b\v\5\x\4\0\0\8\r\0\n\o\9\3\8\j\4\y\y\u\c\r\4\5\7\k\g\d\x\y\c\f\a\2\2\8\p\0\f\c\g\a\2\8\q\k\g\x\a\7\q\6\1\v\b\2\7\7\e\k\b\c\1\r\n\4\7\5\d\i\8\c\0\4\5\4\r\q\0\e\v\z\s\u\m\j\0\j\8\o\0\0\c\q\8\x\9\2\b\g\w\n\v\w\e\f\v\7\y\z\q\i\k\i\e\9\9\4\w\l\u\i\t\2\h\e\c\3\p\f\c\w\x\0\u\h\v\f\q\3\k\j\9\n\e\m\7\a\e\m\e\u\5\5\x\d\d\m\9\6\s\v\z\v\d\l\n\2\a\q\t\2\6\1\u\h\9\i\9\a\m\x\8\o\a\7\w\w\g\q\a\g\i\c\g\6\r\3\h\m\a\c\2\d\c\v\3\w\v\h\8\y\e\d\q\w\w\g\r\6\7\y\z\o\9\7\9\2 ]] 01:16:29.850 05:11:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:16:29.850 05:11:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 01:16:29.850 [2024-12-09 05:11:12.116969] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:29.850 [2024-12-09 05:11:12.117054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60708 ] 01:16:29.850 [2024-12-09 05:11:12.267621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:30.116 [2024-12-09 05:11:12.311827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:30.116 [2024-12-09 05:11:12.351430] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:30.116  [2024-12-09T05:11:12.832Z] Copying: 512/512 [B] (average 500 kBps) 01:16:30.376 01:16:30.376 05:11:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cp9vijhzz21jlxak7fzs23kd19hw4mk846a2pa20y5oifg1vmiml5tnf2kho2hiewas3j44kujxh15k6sczlcj87d1ad29zcf5g7beddeorae56ekro2ebnn2dyvg9dyvnjdpz3f2jnm7jq5cf9xbl4i0r8vgclfffpruzayv678x8hd2j65d5b7ekq28aak9a4m3c9q2otfc9o1uourfk5j2ikw4w06a9lelth2n5dvuq9iuzgg1txsielf99pusonsdwpkvpwyl185o6d2hu7m3ensbv5x4008r0no938j4yyucr457kgdxycfa228p0fcga28qkgxa7q61vb277ekbc1rn475di8c0454rq0evzsumj0j8o00cq8x92bgwnvwefv7yzqikie994wluit2hec3pfcwx0uhvfq3kj9nem7aemeu55xddm96svzvdln2aqt261uh9i9amx8oa7wwgqagicg6r3hmac2dcv3wvh8yedqwwgr67yzo9792 == 
\c\p\9\v\i\j\h\z\z\2\1\j\l\x\a\k\7\f\z\s\2\3\k\d\1\9\h\w\4\m\k\8\4\6\a\2\p\a\2\0\y\5\o\i\f\g\1\v\m\i\m\l\5\t\n\f\2\k\h\o\2\h\i\e\w\a\s\3\j\4\4\k\u\j\x\h\1\5\k\6\s\c\z\l\c\j\8\7\d\1\a\d\2\9\z\c\f\5\g\7\b\e\d\d\e\o\r\a\e\5\6\e\k\r\o\2\e\b\n\n\2\d\y\v\g\9\d\y\v\n\j\d\p\z\3\f\2\j\n\m\7\j\q\5\c\f\9\x\b\l\4\i\0\r\8\v\g\c\l\f\f\f\p\r\u\z\a\y\v\6\7\8\x\8\h\d\2\j\6\5\d\5\b\7\e\k\q\2\8\a\a\k\9\a\4\m\3\c\9\q\2\o\t\f\c\9\o\1\u\o\u\r\f\k\5\j\2\i\k\w\4\w\0\6\a\9\l\e\l\t\h\2\n\5\d\v\u\q\9\i\u\z\g\g\1\t\x\s\i\e\l\f\9\9\p\u\s\o\n\s\d\w\p\k\v\p\w\y\l\1\8\5\o\6\d\2\h\u\7\m\3\e\n\s\b\v\5\x\4\0\0\8\r\0\n\o\9\3\8\j\4\y\y\u\c\r\4\5\7\k\g\d\x\y\c\f\a\2\2\8\p\0\f\c\g\a\2\8\q\k\g\x\a\7\q\6\1\v\b\2\7\7\e\k\b\c\1\r\n\4\7\5\d\i\8\c\0\4\5\4\r\q\0\e\v\z\s\u\m\j\0\j\8\o\0\0\c\q\8\x\9\2\b\g\w\n\v\w\e\f\v\7\y\z\q\i\k\i\e\9\9\4\w\l\u\i\t\2\h\e\c\3\p\f\c\w\x\0\u\h\v\f\q\3\k\j\9\n\e\m\7\a\e\m\e\u\5\5\x\d\d\m\9\6\s\v\z\v\d\l\n\2\a\q\t\2\6\1\u\h\9\i\9\a\m\x\8\o\a\7\w\w\g\q\a\g\i\c\g\6\r\3\h\m\a\c\2\d\c\v\3\w\v\h\8\y\e\d\q\w\w\g\r\6\7\y\z\o\9\7\9\2 ]] 01:16:30.376 05:11:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 01:16:30.376 05:11:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 01:16:30.376 05:11:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 01:16:30.376 05:11:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:16:30.376 05:11:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:16:30.376 05:11:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 01:16:30.376 [2024-12-09 05:11:12.621721] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:30.376 [2024-12-09 05:11:12.621817] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60716 ] 01:16:30.376 [2024-12-09 05:11:12.773404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:30.376 [2024-12-09 05:11:12.829238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:30.635 [2024-12-09 05:11:12.868988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:30.635  [2024-12-09T05:11:13.351Z] Copying: 512/512 [B] (average 500 kBps) 01:16:30.895 01:16:30.895 05:11:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ as6g4rx05qs2t7ayprzvwdk61ue5vqf2yxznike9g1077az31sudjvmgbnllhlppsiqfq3uzuwqjx8pyaioi8k3ii16t6znl36xgv32kmv2c6jua8mylsf0hhw8mgs69ay3dpjwculv2eb7v524jrmz51lrb0pupmarn4umv4efs9c7pk64aswk8gynvi2d4cxq18moi443n3n0swzot8vmd7cs25dbk7x1tczetmgbaloqrnczu8recjlmnoypvaaenuzoen053p9qbrap9lfrhisv9of7joe42vnzevm9o1hgsour9aga0rybfsnlg8x8o2hp5uthdb10qj8x1k08kfbflizi0d3qfrxzkux0cfmx0by4mx046qdud0mguzi77ruxsg6fli895d1qkh0auqw38l5bsbo2rd4d10wtn88e57iq40ekf429uieh01y68mj6kcu8imi6b78vufv1c08i8mjfho6bjnvx14ld0fj84mw2fk4b6je0bo9qu == \a\s\6\g\4\r\x\0\5\q\s\2\t\7\a\y\p\r\z\v\w\d\k\6\1\u\e\5\v\q\f\2\y\x\z\n\i\k\e\9\g\1\0\7\7\a\z\3\1\s\u\d\j\v\m\g\b\n\l\l\h\l\p\p\s\i\q\f\q\3\u\z\u\w\q\j\x\8\p\y\a\i\o\i\8\k\3\i\i\1\6\t\6\z\n\l\3\6\x\g\v\3\2\k\m\v\2\c\6\j\u\a\8\m\y\l\s\f\0\h\h\w\8\m\g\s\6\9\a\y\3\d\p\j\w\c\u\l\v\2\e\b\7\v\5\2\4\j\r\m\z\5\1\l\r\b\0\p\u\p\m\a\r\n\4\u\m\v\4\e\f\s\9\c\7\p\k\6\4\a\s\w\k\8\g\y\n\v\i\2\d\4\c\x\q\1\8\m\o\i\4\4\3\n\3\n\0\s\w\z\o\t\8\v\m\d\7\c\s\2\5\d\b\k\7\x\1\t\c\z\e\t\m\g\b\a\l\o\q\r\n\c\z\u\8\r\e\c\j\l\m\n\o\y\p\v\a\a\e\n\u\z\o\e\n\0\5\3\p\9\q\b\r\a\p\9\l\f\r\h\i\s\v\9\o\f\7\j\o\e\4\2\v\n\z\e\v\m\9\o\1\h\g\s\o\u\r\9\a\g\a\0\r\y\b\f\s\n\l\g\8\x\8\o\2\h\p\5\u\t\h\d\b\1\0\q\j\8\x\1\k\0\8\k\f\b\f\l\i\z\i\0\d\3\q\f\r\x\z\k\u\x\0\c\f\m\x\0\b\y\4\m\x\0\4\6\q\d\u\d\0\m\g\u\z\i\7\7\r\u\x\s\g\6\f\l\i\8\9\5\d\1\q\k\h\0\a\u\q\w\3\8\l\5\b\s\b\o\2\r\d\4\d\1\0\w\t\n\8\8\e\5\7\i\q\4\0\e\k\f\4\2\9\u\i\e\h\0\1\y\6\8\m\j\6\k\c\u\8\i\m\i\6\b\7\8\v\u\f\v\1\c\0\8\i\8\m\j\f\h\o\6\b\j\n\v\x\1\4\l\d\0\f\j\8\4\m\w\2\f\k\4\b\6\j\e\0\b\o\9\q\u ]] 01:16:30.895 05:11:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:16:30.895 05:11:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 01:16:30.895 [2024-12-09 05:11:13.136698] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:30.895 [2024-12-09 05:11:13.136765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60723 ] 01:16:30.895 [2024-12-09 05:11:13.283938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:30.895 [2024-12-09 05:11:13.337144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:31.154 [2024-12-09 05:11:13.377279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:31.154  [2024-12-09T05:11:13.610Z] Copying: 512/512 [B] (average 500 kBps) 01:16:31.154 01:16:31.412 05:11:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ as6g4rx05qs2t7ayprzvwdk61ue5vqf2yxznike9g1077az31sudjvmgbnllhlppsiqfq3uzuwqjx8pyaioi8k3ii16t6znl36xgv32kmv2c6jua8mylsf0hhw8mgs69ay3dpjwculv2eb7v524jrmz51lrb0pupmarn4umv4efs9c7pk64aswk8gynvi2d4cxq18moi443n3n0swzot8vmd7cs25dbk7x1tczetmgbaloqrnczu8recjlmnoypvaaenuzoen053p9qbrap9lfrhisv9of7joe42vnzevm9o1hgsour9aga0rybfsnlg8x8o2hp5uthdb10qj8x1k08kfbflizi0d3qfrxzkux0cfmx0by4mx046qdud0mguzi77ruxsg6fli895d1qkh0auqw38l5bsbo2rd4d10wtn88e57iq40ekf429uieh01y68mj6kcu8imi6b78vufv1c08i8mjfho6bjnvx14ld0fj84mw2fk4b6je0bo9qu == \a\s\6\g\4\r\x\0\5\q\s\2\t\7\a\y\p\r\z\v\w\d\k\6\1\u\e\5\v\q\f\2\y\x\z\n\i\k\e\9\g\1\0\7\7\a\z\3\1\s\u\d\j\v\m\g\b\n\l\l\h\l\p\p\s\i\q\f\q\3\u\z\u\w\q\j\x\8\p\y\a\i\o\i\8\k\3\i\i\1\6\t\6\z\n\l\3\6\x\g\v\3\2\k\m\v\2\c\6\j\u\a\8\m\y\l\s\f\0\h\h\w\8\m\g\s\6\9\a\y\3\d\p\j\w\c\u\l\v\2\e\b\7\v\5\2\4\j\r\m\z\5\1\l\r\b\0\p\u\p\m\a\r\n\4\u\m\v\4\e\f\s\9\c\7\p\k\6\4\a\s\w\k\8\g\y\n\v\i\2\d\4\c\x\q\1\8\m\o\i\4\4\3\n\3\n\0\s\w\z\o\t\8\v\m\d\7\c\s\2\5\d\b\k\7\x\1\t\c\z\e\t\m\g\b\a\l\o\q\r\n\c\z\u\8\r\e\c\j\l\m\n\o\y\p\v\a\a\e\n\u\z\o\e\n\0\5\3\p\9\q\b\r\a\p\9\l\f\r\h\i\s\v\9\o\f\7\j\o\e\4\2\v\n\z\e\v\m\9\o\1\h\g\s\o\u\r\9\a\g\a\0\r\y\b\f\s\n\l\g\8\x\8\o\2\h\p\5\u\t\h\d\b\1\0\q\j\8\x\1\k\0\8\k\f\b\f\l\i\z\i\0\d\3\q\f\r\x\z\k\u\x\0\c\f\m\x\0\b\y\4\m\x\0\4\6\q\d\u\d\0\m\g\u\z\i\7\7\r\u\x\s\g\6\f\l\i\8\9\5\d\1\q\k\h\0\a\u\q\w\3\8\l\5\b\s\b\o\2\r\d\4\d\1\0\w\t\n\8\8\e\5\7\i\q\4\0\e\k\f\4\2\9\u\i\e\h\0\1\y\6\8\m\j\6\k\c\u\8\i\m\i\6\b\7\8\v\u\f\v\1\c\0\8\i\8\m\j\f\h\o\6\b\j\n\v\x\1\4\l\d\0\f\j\8\4\m\w\2\f\k\4\b\6\j\e\0\b\o\9\q\u ]] 01:16:31.412 05:11:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:16:31.412 05:11:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 01:16:31.412 [2024-12-09 05:11:13.663880] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:31.412 [2024-12-09 05:11:13.663942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60731 ] 01:16:31.412 [2024-12-09 05:11:13.815932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:31.670 [2024-12-09 05:11:13.873332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:31.670 [2024-12-09 05:11:13.915047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:31.670  [2024-12-09T05:11:14.384Z] Copying: 512/512 [B] (average 500 kBps) 01:16:31.928 01:16:31.928 05:11:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ as6g4rx05qs2t7ayprzvwdk61ue5vqf2yxznike9g1077az31sudjvmgbnllhlppsiqfq3uzuwqjx8pyaioi8k3ii16t6znl36xgv32kmv2c6jua8mylsf0hhw8mgs69ay3dpjwculv2eb7v524jrmz51lrb0pupmarn4umv4efs9c7pk64aswk8gynvi2d4cxq18moi443n3n0swzot8vmd7cs25dbk7x1tczetmgbaloqrnczu8recjlmnoypvaaenuzoen053p9qbrap9lfrhisv9of7joe42vnzevm9o1hgsour9aga0rybfsnlg8x8o2hp5uthdb10qj8x1k08kfbflizi0d3qfrxzkux0cfmx0by4mx046qdud0mguzi77ruxsg6fli895d1qkh0auqw38l5bsbo2rd4d10wtn88e57iq40ekf429uieh01y68mj6kcu8imi6b78vufv1c08i8mjfho6bjnvx14ld0fj84mw2fk4b6je0bo9qu == \a\s\6\g\4\r\x\0\5\q\s\2\t\7\a\y\p\r\z\v\w\d\k\6\1\u\e\5\v\q\f\2\y\x\z\n\i\k\e\9\g\1\0\7\7\a\z\3\1\s\u\d\j\v\m\g\b\n\l\l\h\l\p\p\s\i\q\f\q\3\u\z\u\w\q\j\x\8\p\y\a\i\o\i\8\k\3\i\i\1\6\t\6\z\n\l\3\6\x\g\v\3\2\k\m\v\2\c\6\j\u\a\8\m\y\l\s\f\0\h\h\w\8\m\g\s\6\9\a\y\3\d\p\j\w\c\u\l\v\2\e\b\7\v\5\2\4\j\r\m\z\5\1\l\r\b\0\p\u\p\m\a\r\n\4\u\m\v\4\e\f\s\9\c\7\p\k\6\4\a\s\w\k\8\g\y\n\v\i\2\d\4\c\x\q\1\8\m\o\i\4\4\3\n\3\n\0\s\w\z\o\t\8\v\m\d\7\c\s\2\5\d\b\k\7\x\1\t\c\z\e\t\m\g\b\a\l\o\q\r\n\c\z\u\8\r\e\c\j\l\m\n\o\y\p\v\a\a\e\n\u\z\o\e\n\0\5\3\p\9\q\b\r\a\p\9\l\f\r\h\i\s\v\9\o\f\7\j\o\e\4\2\v\n\z\e\v\m\9\o\1\h\g\s\o\u\r\9\a\g\a\0\r\y\b\f\s\n\l\g\8\x\8\o\2\h\p\5\u\t\h\d\b\1\0\q\j\8\x\1\k\0\8\k\f\b\f\l\i\z\i\0\d\3\q\f\r\x\z\k\u\x\0\c\f\m\x\0\b\y\4\m\x\0\4\6\q\d\u\d\0\m\g\u\z\i\7\7\r\u\x\s\g\6\f\l\i\8\9\5\d\1\q\k\h\0\a\u\q\w\3\8\l\5\b\s\b\o\2\r\d\4\d\1\0\w\t\n\8\8\e\5\7\i\q\4\0\e\k\f\4\2\9\u\i\e\h\0\1\y\6\8\m\j\6\k\c\u\8\i\m\i\6\b\7\8\v\u\f\v\1\c\0\8\i\8\m\j\f\h\o\6\b\j\n\v\x\1\4\l\d\0\f\j\8\4\m\w\2\f\k\4\b\6\j\e\0\b\o\9\q\u ]] 01:16:31.928 05:11:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 01:16:31.928 05:11:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 01:16:31.928 [2024-12-09 05:11:14.202767] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:31.928 [2024-12-09 05:11:14.202838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60738 ] 01:16:31.928 [2024-12-09 05:11:14.356433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:32.187 [2024-12-09 05:11:14.409930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:32.187 [2024-12-09 05:11:14.450478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:32.187  [2024-12-09T05:11:14.901Z] Copying: 512/512 [B] (average 250 kBps) 01:16:32.445 01:16:32.445 05:11:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ as6g4rx05qs2t7ayprzvwdk61ue5vqf2yxznike9g1077az31sudjvmgbnllhlppsiqfq3uzuwqjx8pyaioi8k3ii16t6znl36xgv32kmv2c6jua8mylsf0hhw8mgs69ay3dpjwculv2eb7v524jrmz51lrb0pupmarn4umv4efs9c7pk64aswk8gynvi2d4cxq18moi443n3n0swzot8vmd7cs25dbk7x1tczetmgbaloqrnczu8recjlmnoypvaaenuzoen053p9qbrap9lfrhisv9of7joe42vnzevm9o1hgsour9aga0rybfsnlg8x8o2hp5uthdb10qj8x1k08kfbflizi0d3qfrxzkux0cfmx0by4mx046qdud0mguzi77ruxsg6fli895d1qkh0auqw38l5bsbo2rd4d10wtn88e57iq40ekf429uieh01y68mj6kcu8imi6b78vufv1c08i8mjfho6bjnvx14ld0fj84mw2fk4b6je0bo9qu == \a\s\6\g\4\r\x\0\5\q\s\2\t\7\a\y\p\r\z\v\w\d\k\6\1\u\e\5\v\q\f\2\y\x\z\n\i\k\e\9\g\1\0\7\7\a\z\3\1\s\u\d\j\v\m\g\b\n\l\l\h\l\p\p\s\i\q\f\q\3\u\z\u\w\q\j\x\8\p\y\a\i\o\i\8\k\3\i\i\1\6\t\6\z\n\l\3\6\x\g\v\3\2\k\m\v\2\c\6\j\u\a\8\m\y\l\s\f\0\h\h\w\8\m\g\s\6\9\a\y\3\d\p\j\w\c\u\l\v\2\e\b\7\v\5\2\4\j\r\m\z\5\1\l\r\b\0\p\u\p\m\a\r\n\4\u\m\v\4\e\f\s\9\c\7\p\k\6\4\a\s\w\k\8\g\y\n\v\i\2\d\4\c\x\q\1\8\m\o\i\4\4\3\n\3\n\0\s\w\z\o\t\8\v\m\d\7\c\s\2\5\d\b\k\7\x\1\t\c\z\e\t\m\g\b\a\l\o\q\r\n\c\z\u\8\r\e\c\j\l\m\n\o\y\p\v\a\a\e\n\u\z\o\e\n\0\5\3\p\9\q\b\r\a\p\9\l\f\r\h\i\s\v\9\o\f\7\j\o\e\4\2\v\n\z\e\v\m\9\o\1\h\g\s\o\u\r\9\a\g\a\0\r\y\b\f\s\n\l\g\8\x\8\o\2\h\p\5\u\t\h\d\b\1\0\q\j\8\x\1\k\0\8\k\f\b\f\l\i\z\i\0\d\3\q\f\r\x\z\k\u\x\0\c\f\m\x\0\b\y\4\m\x\0\4\6\q\d\u\d\0\m\g\u\z\i\7\7\r\u\x\s\g\6\f\l\i\8\9\5\d\1\q\k\h\0\a\u\q\w\3\8\l\5\b\s\b\o\2\r\d\4\d\1\0\w\t\n\8\8\e\5\7\i\q\4\0\e\k\f\4\2\9\u\i\e\h\0\1\y\6\8\m\j\6\k\c\u\8\i\m\i\6\b\7\8\v\u\f\v\1\c\0\8\i\8\m\j\f\h\o\6\b\j\n\v\x\1\4\l\d\0\f\j\8\4\m\w\2\f\k\4\b\6\j\e\0\b\o\9\q\u ]] 01:16:32.445 01:16:32.445 real 0m4.228s 01:16:32.445 user 0m2.374s 01:16:32.445 sys 0m0.889s 01:16:32.445 ************************************ 01:16:32.445 END TEST dd_flags_misc_forced_aio 01:16:32.445 ************************************ 01:16:32.445 05:11:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:32.445 05:11:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 01:16:32.445 05:11:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 01:16:32.445 05:11:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 01:16:32.445 05:11:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 01:16:32.445 01:16:32.445 real 0m19.929s 01:16:32.445 user 0m10.031s 01:16:32.445 sys 0m5.654s 01:16:32.445 05:11:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:32.445 05:11:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
01:16:32.445 ************************************ 01:16:32.445 END TEST spdk_dd_posix 01:16:32.445 ************************************ 01:16:32.445 05:11:14 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 01:16:32.445 05:11:14 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:32.445 05:11:14 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:32.445 05:11:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:16:32.445 ************************************ 01:16:32.445 START TEST spdk_dd_malloc 01:16:32.445 ************************************ 01:16:32.445 05:11:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 01:16:32.704 * Looking for test storage... 01:16:32.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:32.704 05:11:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 01:16:32.704 05:11:15 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 01:16:32.704 05:11:15 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 01:16:32.704 05:11:15 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 01:16:32.704 05:11:15 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:32.704 05:11:15 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 01:16:32.704 05:11:15 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 01:16:32.704 05:11:15 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:32.704 05:11:15 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:32.704 05:11:15 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 01:16:32.704 05:11:15 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:32.704 05:11:15 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:32.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:32.704 --rc genhtml_branch_coverage=1 01:16:32.704 --rc genhtml_function_coverage=1 01:16:32.704 --rc genhtml_legend=1 01:16:32.704 --rc geninfo_all_blocks=1 01:16:32.704 --rc geninfo_unexecuted_blocks=1 01:16:32.704 01:16:32.704 ' 01:16:32.704 05:11:15 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:32.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:32.704 --rc genhtml_branch_coverage=1 01:16:32.704 --rc genhtml_function_coverage=1 01:16:32.704 --rc genhtml_legend=1 01:16:32.704 --rc geninfo_all_blocks=1 01:16:32.704 --rc geninfo_unexecuted_blocks=1 01:16:32.704 01:16:32.704 ' 01:16:32.704 05:11:15 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:32.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:32.704 --rc genhtml_branch_coverage=1 01:16:32.704 --rc genhtml_function_coverage=1 01:16:32.704 --rc genhtml_legend=1 01:16:32.704 --rc geninfo_all_blocks=1 01:16:32.704 --rc geninfo_unexecuted_blocks=1 01:16:32.704 01:16:32.704 ' 01:16:32.704 05:11:15 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:32.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:32.704 --rc genhtml_branch_coverage=1 01:16:32.704 --rc genhtml_function_coverage=1 01:16:32.704 --rc genhtml_legend=1 01:16:32.704 --rc geninfo_all_blocks=1 01:16:32.704 --rc geninfo_unexecuted_blocks=1 01:16:32.704 01:16:32.704 ' 01:16:32.704 05:11:15 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:16:32.704 05:11:15 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:16:32.705 05:11:15 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 01:16:32.705 ************************************ 01:16:32.705 START TEST dd_malloc_copy 01:16:32.705 ************************************ 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 01:16:32.705 05:11:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 01:16:32.705 { 01:16:32.705 "subsystems": [ 01:16:32.705 { 01:16:32.705 "subsystem": "bdev", 01:16:32.705 "config": [ 01:16:32.705 { 01:16:32.705 "params": { 01:16:32.705 "block_size": 512, 01:16:32.705 "num_blocks": 1048576, 01:16:32.705 "name": "malloc0" 01:16:32.705 }, 01:16:32.705 "method": "bdev_malloc_create" 01:16:32.705 }, 01:16:32.705 { 01:16:32.705 "params": { 01:16:32.705 "block_size": 512, 01:16:32.705 "num_blocks": 1048576, 01:16:32.705 "name": "malloc1" 01:16:32.705 }, 01:16:32.705 "method": "bdev_malloc_create" 01:16:32.705 }, 01:16:32.705 { 01:16:32.705 "method": "bdev_wait_for_examine" 01:16:32.705 } 01:16:32.705 ] 01:16:32.705 } 01:16:32.705 ] 01:16:32.705 } 01:16:32.705 [2024-12-09 05:11:15.091005] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:32.705 [2024-12-09 05:11:15.091184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60820 ] 01:16:33.059 [2024-12-09 05:11:15.247187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:33.059 [2024-12-09 05:11:15.299404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:33.059 [2024-12-09 05:11:15.340059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:34.432  [2024-12-09T05:11:17.825Z] Copying: 235/512 [MB] (235 MBps) [2024-12-09T05:11:17.825Z] Copying: 465/512 [MB] (230 MBps) [2024-12-09T05:11:18.392Z] Copying: 512/512 [MB] (average 233 MBps) 01:16:35.936 01:16:35.936 05:11:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 01:16:35.936 05:11:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 01:16:35.936 05:11:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 01:16:35.936 05:11:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 01:16:35.936 [2024-12-09 05:11:18.346432] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:35.936 [2024-12-09 05:11:18.346577] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60862 ] 01:16:35.936 { 01:16:35.936 "subsystems": [ 01:16:35.936 { 01:16:35.936 "subsystem": "bdev", 01:16:35.936 "config": [ 01:16:35.936 { 01:16:35.936 "params": { 01:16:35.936 "block_size": 512, 01:16:35.936 "num_blocks": 1048576, 01:16:35.936 "name": "malloc0" 01:16:35.936 }, 01:16:35.936 "method": "bdev_malloc_create" 01:16:35.936 }, 01:16:35.936 { 01:16:35.936 "params": { 01:16:35.936 "block_size": 512, 01:16:35.937 "num_blocks": 1048576, 01:16:35.937 "name": "malloc1" 01:16:35.937 }, 01:16:35.937 "method": "bdev_malloc_create" 01:16:35.937 }, 01:16:35.937 { 01:16:35.937 "method": "bdev_wait_for_examine" 01:16:35.937 } 01:16:35.937 ] 01:16:35.937 } 01:16:35.937 ] 01:16:35.937 } 01:16:36.196 [2024-12-09 05:11:18.489420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:36.196 [2024-12-09 05:11:18.542977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:36.196 [2024-12-09 05:11:18.585349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:37.587  [2024-12-09T05:11:20.977Z] Copying: 258/512 [MB] (258 MBps) [2024-12-09T05:11:20.978Z] Copying: 509/512 [MB] (250 MBps) [2024-12-09T05:11:21.546Z] Copying: 512/512 [MB] (average 255 MBps) 01:16:39.090 01:16:39.090 01:16:39.090 real 0m6.316s 01:16:39.090 user 0m5.484s 01:16:39.090 sys 0m0.659s 01:16:39.090 05:11:21 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:39.090 05:11:21 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 01:16:39.090 ************************************ 01:16:39.090 END TEST dd_malloc_copy 01:16:39.090 ************************************ 01:16:39.090 ************************************ 01:16:39.090 END TEST spdk_dd_malloc 01:16:39.090 ************************************ 01:16:39.090 01:16:39.090 real 0m6.598s 01:16:39.090 user 0m5.625s 01:16:39.090 sys 0m0.813s 01:16:39.090 05:11:21 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:39.090 05:11:21 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 01:16:39.090 05:11:21 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 01:16:39.090 05:11:21 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:16:39.090 05:11:21 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:39.090 05:11:21 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:16:39.090 ************************************ 01:16:39.090 START TEST spdk_dd_bdev_to_bdev 01:16:39.090 ************************************ 01:16:39.090 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 01:16:39.349 * Looking for test storage... 
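The dd_malloc_copy stage that just finished pairs two RAM-backed bdevs of 1048576 blocks x 512 bytes (512 MiB each) and copies one onto the other in both directions, averaging about 233 MBps forward and 255 MBps back. The JSON the harness streams to spdk_dd on /dev/fd/62 is the config printed above; the same run can be sketched with an ordinary file (using a file path instead of the process substitution is an assumption, but spdk_dd only needs a readable path), with malloc.json containing:

    {
      "subsystems": [
        { "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
            { "method": "bdev_malloc_create",
              "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
            { "method": "bdev_wait_for_examine" }
          ] }
      ]
    }

and then:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json malloc.json
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json malloc.json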
01:16:39.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:39.349 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:39.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:39.350 --rc genhtml_branch_coverage=1 01:16:39.350 --rc genhtml_function_coverage=1 01:16:39.350 --rc genhtml_legend=1 01:16:39.350 --rc geninfo_all_blocks=1 01:16:39.350 --rc geninfo_unexecuted_blocks=1 01:16:39.350 01:16:39.350 ' 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:39.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:39.350 --rc genhtml_branch_coverage=1 01:16:39.350 --rc genhtml_function_coverage=1 01:16:39.350 --rc genhtml_legend=1 01:16:39.350 --rc geninfo_all_blocks=1 01:16:39.350 --rc geninfo_unexecuted_blocks=1 01:16:39.350 01:16:39.350 ' 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:39.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:39.350 --rc genhtml_branch_coverage=1 01:16:39.350 --rc genhtml_function_coverage=1 01:16:39.350 --rc genhtml_legend=1 01:16:39.350 --rc geninfo_all_blocks=1 01:16:39.350 --rc geninfo_unexecuted_blocks=1 01:16:39.350 01:16:39.350 ' 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:39.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:39.350 --rc genhtml_branch_coverage=1 01:16:39.350 --rc genhtml_function_coverage=1 01:16:39.350 --rc genhtml_legend=1 01:16:39.350 --rc geninfo_all_blocks=1 01:16:39.350 --rc geninfo_unexecuted_blocks=1 01:16:39.350 01:16:39.350 ' 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:16:39.350 05:11:21 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:16:39.350 ************************************ 01:16:39.350 START TEST dd_inflate_file 01:16:39.350 ************************************ 01:16:39.350 05:11:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 01:16:39.350 [2024-12-09 05:11:21.710597] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:39.350 [2024-12-09 05:11:21.710722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60975 ] 01:16:39.608 [2024-12-09 05:11:21.861450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:39.608 [2024-12-09 05:11:21.914187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:39.608 [2024-12-09 05:11:21.954695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:39.608  [2024-12-09T05:11:22.324Z] Copying: 64/64 [MB] (average 1454 MBps) 01:16:39.868 01:16:39.868 01:16:39.868 real 0m0.546s 01:16:39.868 user 0m0.339s 01:16:39.868 sys 0m0.255s 01:16:39.868 05:11:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:39.868 05:11:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 01:16:39.868 ************************************ 01:16:39.868 END TEST dd_inflate_file 01:16:39.868 ************************************ 01:16:39.868 05:11:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 01:16:39.868 05:11:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 01:16:39.868 05:11:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 01:16:39.868 05:11:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 01:16:39.868 05:11:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:39.868 05:11:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:16:39.868 05:11:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 01:16:39.868 05:11:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 01:16:39.868 05:11:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:16:39.868 ************************************ 01:16:39.868 START TEST dd_copy_to_out_bdev 01:16:39.868 ************************************ 01:16:39.868 05:11:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 01:16:40.127 [2024-12-09 05:11:22.343171] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:40.127 [2024-12-09 05:11:22.343229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61005 ] 01:16:40.127 { 01:16:40.127 "subsystems": [ 01:16:40.127 { 01:16:40.127 "subsystem": "bdev", 01:16:40.127 "config": [ 01:16:40.127 { 01:16:40.127 "params": { 01:16:40.127 "trtype": "pcie", 01:16:40.127 "traddr": "0000:00:10.0", 01:16:40.127 "name": "Nvme0" 01:16:40.127 }, 01:16:40.127 "method": "bdev_nvme_attach_controller" 01:16:40.127 }, 01:16:40.127 { 01:16:40.127 "params": { 01:16:40.127 "trtype": "pcie", 01:16:40.127 "traddr": "0000:00:11.0", 01:16:40.127 "name": "Nvme1" 01:16:40.127 }, 01:16:40.127 "method": "bdev_nvme_attach_controller" 01:16:40.127 }, 01:16:40.127 { 01:16:40.127 "method": "bdev_wait_for_examine" 01:16:40.127 } 01:16:40.127 ] 01:16:40.127 } 01:16:40.127 ] 01:16:40.127 } 01:16:40.127 [2024-12-09 05:11:22.493974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:40.127 [2024-12-09 05:11:22.541274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:40.386 [2024-12-09 05:11:22.582564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:41.323  [2024-12-09T05:11:23.779Z] Copying: 64/64 [MB] (average 95 MBps) 01:16:41.323 01:16:41.323 01:16:41.323 real 0m1.387s 01:16:41.323 user 0m1.167s 01:16:41.323 sys 0m1.020s 01:16:41.323 05:11:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:41.323 05:11:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 01:16:41.323 ************************************ 01:16:41.323 END TEST dd_copy_to_out_bdev 01:16:41.323 ************************************ 01:16:41.323 05:11:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 01:16:41.323 05:11:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 01:16:41.323 05:11:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:41.323 05:11:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:41.323 05:11:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:16:41.323 ************************************ 01:16:41.323 START TEST dd_offset_magic 01:16:41.323 ************************************ 01:16:41.323 05:11:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 01:16:41.323 05:11:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 01:16:41.323 05:11:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 01:16:41.323 05:11:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 01:16:41.323 05:11:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 01:16:41.323 05:11:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 01:16:41.323 05:11:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 01:16:41.323 05:11:23 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 01:16:41.323 05:11:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 01:16:41.581 [2024-12-09 05:11:23.808895] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:41.581 [2024-12-09 05:11:23.809013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61049 ] 01:16:41.581 { 01:16:41.581 "subsystems": [ 01:16:41.581 { 01:16:41.581 "subsystem": "bdev", 01:16:41.581 "config": [ 01:16:41.581 { 01:16:41.581 "params": { 01:16:41.581 "trtype": "pcie", 01:16:41.581 "traddr": "0000:00:10.0", 01:16:41.581 "name": "Nvme0" 01:16:41.581 }, 01:16:41.581 "method": "bdev_nvme_attach_controller" 01:16:41.581 }, 01:16:41.581 { 01:16:41.581 "params": { 01:16:41.581 "trtype": "pcie", 01:16:41.581 "traddr": "0000:00:11.0", 01:16:41.581 "name": "Nvme1" 01:16:41.581 }, 01:16:41.581 "method": "bdev_nvme_attach_controller" 01:16:41.581 }, 01:16:41.581 { 01:16:41.581 "method": "bdev_wait_for_examine" 01:16:41.581 } 01:16:41.581 ] 01:16:41.581 } 01:16:41.581 ] 01:16:41.581 } 01:16:41.581 [2024-12-09 05:11:23.961678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:41.581 [2024-12-09 05:11:24.015314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:41.871 [2024-12-09 05:11:24.058301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:41.871  [2024-12-09T05:11:24.587Z] Copying: 65/65 [MB] (average 698 MBps) 01:16:42.131 01:16:42.131 05:11:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 01:16:42.131 05:11:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 01:16:42.131 05:11:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 01:16:42.131 05:11:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 01:16:42.391 [2024-12-09 05:11:24.630469] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:42.391 [2024-12-09 05:11:24.630617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61063 ] 01:16:42.391 { 01:16:42.391 "subsystems": [ 01:16:42.391 { 01:16:42.391 "subsystem": "bdev", 01:16:42.391 "config": [ 01:16:42.391 { 01:16:42.391 "params": { 01:16:42.391 "trtype": "pcie", 01:16:42.391 "traddr": "0000:00:10.0", 01:16:42.391 "name": "Nvme0" 01:16:42.391 }, 01:16:42.391 "method": "bdev_nvme_attach_controller" 01:16:42.391 }, 01:16:42.391 { 01:16:42.391 "params": { 01:16:42.391 "trtype": "pcie", 01:16:42.391 "traddr": "0000:00:11.0", 01:16:42.391 "name": "Nvme1" 01:16:42.391 }, 01:16:42.391 "method": "bdev_nvme_attach_controller" 01:16:42.391 }, 01:16:42.391 { 01:16:42.391 "method": "bdev_wait_for_examine" 01:16:42.391 } 01:16:42.391 ] 01:16:42.391 } 01:16:42.391 ] 01:16:42.391 } 01:16:42.391 [2024-12-09 05:11:24.785260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:42.391 [2024-12-09 05:11:24.841235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:42.651 [2024-12-09 05:11:24.883523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:42.651  [2024-12-09T05:11:25.366Z] Copying: 1024/1024 [kB] (average 333 MBps) 01:16:42.910 01:16:42.911 05:11:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 01:16:42.911 05:11:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 01:16:42.911 05:11:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 01:16:42.911 05:11:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 01:16:42.911 05:11:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 01:16:42.911 05:11:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 01:16:42.911 05:11:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 01:16:42.911 [2024-12-09 05:11:25.317777] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:42.911 [2024-12-09 05:11:25.317853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61080 ] 01:16:42.911 { 01:16:42.911 "subsystems": [ 01:16:42.911 { 01:16:42.911 "subsystem": "bdev", 01:16:42.911 "config": [ 01:16:42.911 { 01:16:42.911 "params": { 01:16:42.911 "trtype": "pcie", 01:16:42.911 "traddr": "0000:00:10.0", 01:16:42.911 "name": "Nvme0" 01:16:42.911 }, 01:16:42.911 "method": "bdev_nvme_attach_controller" 01:16:42.911 }, 01:16:42.911 { 01:16:42.911 "params": { 01:16:42.911 "trtype": "pcie", 01:16:42.911 "traddr": "0000:00:11.0", 01:16:42.911 "name": "Nvme1" 01:16:42.911 }, 01:16:42.911 "method": "bdev_nvme_attach_controller" 01:16:42.911 }, 01:16:42.911 { 01:16:42.911 "method": "bdev_wait_for_examine" 01:16:42.911 } 01:16:42.911 ] 01:16:42.911 } 01:16:42.911 ] 01:16:42.911 } 01:16:43.169 [2024-12-09 05:11:25.462698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:43.169 [2024-12-09 05:11:25.518567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:43.169 [2024-12-09 05:11:25.561043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:43.428  [2024-12-09T05:11:26.143Z] Copying: 65/65 [MB] (average 792 MBps) 01:16:43.687 01:16:43.687 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 01:16:43.687 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 01:16:43.687 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 01:16:43.687 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 01:16:43.687 [2024-12-09 05:11:26.126545] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:43.687 [2024-12-09 05:11:26.126662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61100 ] 01:16:43.944 { 01:16:43.944 "subsystems": [ 01:16:43.944 { 01:16:43.944 "subsystem": "bdev", 01:16:43.944 "config": [ 01:16:43.944 { 01:16:43.944 "params": { 01:16:43.944 "trtype": "pcie", 01:16:43.944 "traddr": "0000:00:10.0", 01:16:43.944 "name": "Nvme0" 01:16:43.944 }, 01:16:43.944 "method": "bdev_nvme_attach_controller" 01:16:43.944 }, 01:16:43.944 { 01:16:43.944 "params": { 01:16:43.944 "trtype": "pcie", 01:16:43.944 "traddr": "0000:00:11.0", 01:16:43.944 "name": "Nvme1" 01:16:43.944 }, 01:16:43.944 "method": "bdev_nvme_attach_controller" 01:16:43.944 }, 01:16:43.944 { 01:16:43.944 "method": "bdev_wait_for_examine" 01:16:43.944 } 01:16:43.944 ] 01:16:43.944 } 01:16:43.944 ] 01:16:43.944 } 01:16:43.944 [2024-12-09 05:11:26.279443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:43.944 [2024-12-09 05:11:26.327603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:43.944 [2024-12-09 05:11:26.371049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:44.207  [2024-12-09T05:11:26.925Z] Copying: 1024/1024 [kB] (average 1000 MBps) 01:16:44.469 01:16:44.469 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 01:16:44.469 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 01:16:44.469 ************************************ 01:16:44.469 END TEST dd_offset_magic 01:16:44.469 ************************************ 01:16:44.469 01:16:44.469 real 0m2.996s 01:16:44.469 user 0m2.230s 01:16:44.469 sys 0m0.825s 01:16:44.469 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:44.469 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 01:16:44.469 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 01:16:44.469 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 01:16:44.469 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 01:16:44.469 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 01:16:44.469 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 01:16:44.469 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 01:16:44.469 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 01:16:44.469 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 01:16:44.469 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 01:16:44.469 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 01:16:44.469 05:11:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:16:44.469 [2024-12-09 05:11:26.856628] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:44.469 [2024-12-09 05:11:26.856701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61136 ] 01:16:44.469 { 01:16:44.469 "subsystems": [ 01:16:44.469 { 01:16:44.469 "subsystem": "bdev", 01:16:44.469 "config": [ 01:16:44.469 { 01:16:44.469 "params": { 01:16:44.469 "trtype": "pcie", 01:16:44.469 "traddr": "0000:00:10.0", 01:16:44.469 "name": "Nvme0" 01:16:44.469 }, 01:16:44.469 "method": "bdev_nvme_attach_controller" 01:16:44.469 }, 01:16:44.469 { 01:16:44.469 "params": { 01:16:44.469 "trtype": "pcie", 01:16:44.469 "traddr": "0000:00:11.0", 01:16:44.469 "name": "Nvme1" 01:16:44.469 }, 01:16:44.469 "method": "bdev_nvme_attach_controller" 01:16:44.469 }, 01:16:44.469 { 01:16:44.469 "method": "bdev_wait_for_examine" 01:16:44.469 } 01:16:44.469 ] 01:16:44.469 } 01:16:44.469 ] 01:16:44.469 } 01:16:44.727 [2024-12-09 05:11:27.008261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:44.727 [2024-12-09 05:11:27.062593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:44.727 [2024-12-09 05:11:27.104784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:44.986  [2024-12-09T05:11:27.764Z] Copying: 5120/5120 [kB] (average 1000 MBps) 01:16:45.308 01:16:45.308 05:11:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 01:16:45.308 05:11:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 01:16:45.308 05:11:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 01:16:45.308 05:11:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 01:16:45.308 05:11:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 01:16:45.308 05:11:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 01:16:45.308 05:11:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 01:16:45.308 05:11:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 01:16:45.308 05:11:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 01:16:45.308 05:11:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:16:45.308 { 01:16:45.308 "subsystems": [ 01:16:45.308 { 01:16:45.308 "subsystem": "bdev", 01:16:45.308 "config": [ 01:16:45.308 { 01:16:45.308 "params": { 01:16:45.308 "trtype": "pcie", 01:16:45.308 "traddr": "0000:00:10.0", 01:16:45.308 "name": "Nvme0" 01:16:45.308 }, 01:16:45.308 "method": "bdev_nvme_attach_controller" 01:16:45.308 }, 01:16:45.308 { 01:16:45.308 "params": { 01:16:45.308 "trtype": "pcie", 01:16:45.308 "traddr": "0000:00:11.0", 01:16:45.308 "name": "Nvme1" 01:16:45.308 }, 01:16:45.308 "method": "bdev_nvme_attach_controller" 01:16:45.308 }, 01:16:45.308 { 01:16:45.308 "method": "bdev_wait_for_examine" 01:16:45.308 } 01:16:45.308 ] 01:16:45.308 } 01:16:45.308 ] 01:16:45.308 } 01:16:45.308 [2024-12-09 05:11:27.541417] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:45.308 [2024-12-09 05:11:27.541494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61147 ] 01:16:45.308 [2024-12-09 05:11:27.695005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:45.308 [2024-12-09 05:11:27.747672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:45.567 [2024-12-09 05:11:27.789515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:45.567  [2024-12-09T05:11:28.282Z] Copying: 5120/5120 [kB] (average 714 MBps) 01:16:45.826 01:16:45.826 05:11:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 01:16:45.826 ************************************ 01:16:45.826 END TEST spdk_dd_bdev_to_bdev 01:16:45.826 ************************************ 01:16:45.826 01:16:45.826 real 0m6.738s 01:16:45.826 user 0m4.913s 01:16:45.826 sys 0m2.827s 01:16:45.826 05:11:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:45.826 05:11:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:16:45.826 05:11:28 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 01:16:45.826 05:11:28 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 01:16:45.826 05:11:28 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:45.826 05:11:28 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:45.826 05:11:28 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:16:45.826 ************************************ 01:16:45.826 START TEST spdk_dd_uring 01:16:45.826 ************************************ 01:16:45.826 05:11:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 01:16:46.085 * Looking for test storage... 
01:16:46.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 01:16:46.085 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:46.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:46.086 --rc genhtml_branch_coverage=1 01:16:46.086 --rc genhtml_function_coverage=1 01:16:46.086 --rc genhtml_legend=1 01:16:46.086 --rc geninfo_all_blocks=1 01:16:46.086 --rc geninfo_unexecuted_blocks=1 01:16:46.086 01:16:46.086 ' 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:46.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:46.086 --rc genhtml_branch_coverage=1 01:16:46.086 --rc genhtml_function_coverage=1 01:16:46.086 --rc genhtml_legend=1 01:16:46.086 --rc geninfo_all_blocks=1 01:16:46.086 --rc geninfo_unexecuted_blocks=1 01:16:46.086 01:16:46.086 ' 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:46.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:46.086 --rc genhtml_branch_coverage=1 01:16:46.086 --rc genhtml_function_coverage=1 01:16:46.086 --rc genhtml_legend=1 01:16:46.086 --rc geninfo_all_blocks=1 01:16:46.086 --rc geninfo_unexecuted_blocks=1 01:16:46.086 01:16:46.086 ' 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:46.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:46.086 --rc genhtml_branch_coverage=1 01:16:46.086 --rc genhtml_function_coverage=1 01:16:46.086 --rc genhtml_legend=1 01:16:46.086 --rc geninfo_all_blocks=1 01:16:46.086 --rc geninfo_unexecuted_blocks=1 01:16:46.086 01:16:46.086 ' 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 01:16:46.086 ************************************ 01:16:46.086 START TEST dd_uring_copy 01:16:46.086 ************************************ 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 01:16:46.086 
05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 01:16:46.086 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:16:46.345 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=5qbw4t7yg0v3xu5geyh2mp2b17xmcurdxhdackye7g2dp1hwos2ra43jkt2md86xqsui8f99n7qrxa1ejysr3eiqruei0jfl76rgcj1uctaajal5e5h2tei202qqs0rcurkommwvsx63hduv338m22moe4hjwck286muu5i27nw8dsqc8fb6zhmh29hd6494chfnzksx6ou8n5l0t9bggbkpdt2ljt91xmzole2drklf0o7kynlvf3r96s41caervfwve4xuyp488lt8ulj96wetm8xt5oz8icc6pwh70hrkds2x7wp2grnatakr9eli88wi8q9me4ncqkothkd7441cgf5z0pqpqk73cvzbk2z9p27cd92pkcmzdlclic59qauv1krmpdt4g1lwmezwysrsfl0ijpgjwwumwuc4a4hkmmvts3ffzsryq45q0n7wn4n8u2o5n78uq5qn6js8sm0b3jkynn5w7jw1aibrwfozekxzn48bmawqevys9ww5ay9c0gdb0tr3evs6wmslan1z2ixe5tb04kic6cswg3puadw9g9lcddtalze9kcqfh3dknzcqts5gr51rctnqobqrjli8yuuq5s64x6heyld39ggp0ycg00375bralrrbertghlgyt2x84khtzus7qx1lj66l7dlumyyzhroljvp2fbnofgdji920zvv7rw1ku6cntkt4lmvfzelhc8hcfychairnrvjgepejszdwi4ojovmmii20mwvxdmr8d78mgb6ovli7m7r66dn5q8yb2o3gsdes2ayni67bchv85t56atgg3ftbub0its3txoine2f8utaxdfyw4ydirdlc89lm90zd0eddav2xu2v5p18ani4xgdsmiemdmka4u0r1luezza7qt3f4ajh8j68o8gb2ueh7c5lr69twrfrzs4d9lvltc6geo2vbkrkdksubc5k1bwc31b0ormkpw97ddef0jbomcz4voofwupcyeclzcfy85jbijwh2az51oxix 01:16:46.345 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
5qbw4t7yg0v3xu5geyh2mp2b17xmcurdxhdackye7g2dp1hwos2ra43jkt2md86xqsui8f99n7qrxa1ejysr3eiqruei0jfl76rgcj1uctaajal5e5h2tei202qqs0rcurkommwvsx63hduv338m22moe4hjwck286muu5i27nw8dsqc8fb6zhmh29hd6494chfnzksx6ou8n5l0t9bggbkpdt2ljt91xmzole2drklf0o7kynlvf3r96s41caervfwve4xuyp488lt8ulj96wetm8xt5oz8icc6pwh70hrkds2x7wp2grnatakr9eli88wi8q9me4ncqkothkd7441cgf5z0pqpqk73cvzbk2z9p27cd92pkcmzdlclic59qauv1krmpdt4g1lwmezwysrsfl0ijpgjwwumwuc4a4hkmmvts3ffzsryq45q0n7wn4n8u2o5n78uq5qn6js8sm0b3jkynn5w7jw1aibrwfozekxzn48bmawqevys9ww5ay9c0gdb0tr3evs6wmslan1z2ixe5tb04kic6cswg3puadw9g9lcddtalze9kcqfh3dknzcqts5gr51rctnqobqrjli8yuuq5s64x6heyld39ggp0ycg00375bralrrbertghlgyt2x84khtzus7qx1lj66l7dlumyyzhroljvp2fbnofgdji920zvv7rw1ku6cntkt4lmvfzelhc8hcfychairnrvjgepejszdwi4ojovmmii20mwvxdmr8d78mgb6ovli7m7r66dn5q8yb2o3gsdes2ayni67bchv85t56atgg3ftbub0its3txoine2f8utaxdfyw4ydirdlc89lm90zd0eddav2xu2v5p18ani4xgdsmiemdmka4u0r1luezza7qt3f4ajh8j68o8gb2ueh7c5lr69twrfrzs4d9lvltc6geo2vbkrkdksubc5k1bwc31b0ormkpw97ddef0jbomcz4voofwupcyeclzcfy85jbijwh2az51oxix 01:16:46.345 05:11:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 01:16:46.345 [2024-12-09 05:11:28.603081] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:46.346 [2024-12-09 05:11:28.603134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61225 ] 01:16:46.346 [2024-12-09 05:11:28.750089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:46.605 [2024-12-09 05:11:28.803605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:46.605 [2024-12-09 05:11:28.844431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:47.188  [2024-12-09T05:11:29.958Z] Copying: 511/511 [MB] (average 1147 MBps) 01:16:47.502 01:16:47.502 05:11:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 01:16:47.502 05:11:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 01:16:47.502 05:11:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 01:16:47.502 05:11:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:16:47.502 { 01:16:47.502 "subsystems": [ 01:16:47.502 { 01:16:47.502 "subsystem": "bdev", 01:16:47.502 "config": [ 01:16:47.502 { 01:16:47.502 "params": { 01:16:47.502 "block_size": 512, 01:16:47.502 "num_blocks": 1048576, 01:16:47.502 "name": "malloc0" 01:16:47.502 }, 01:16:47.502 "method": "bdev_malloc_create" 01:16:47.502 }, 01:16:47.502 { 01:16:47.502 "params": { 01:16:47.502 "filename": "/dev/zram1", 01:16:47.502 "name": "uring0" 01:16:47.502 }, 01:16:47.502 "method": "bdev_uring_create" 01:16:47.502 }, 01:16:47.502 { 01:16:47.502 "method": "bdev_wait_for_examine" 01:16:47.502 } 01:16:47.502 ] 01:16:47.502 } 01:16:47.502 ] 01:16:47.502 } 01:16:47.502 [2024-12-09 05:11:29.882764] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:47.502 [2024-12-09 05:11:29.882831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61246 ] 01:16:47.761 [2024-12-09 05:11:30.035113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:47.761 [2024-12-09 05:11:30.088780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:47.761 [2024-12-09 05:11:30.130545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:49.137  [2024-12-09T05:11:32.160Z] Copying: 294/512 [MB] (294 MBps) [2024-12-09T05:11:32.727Z] Copying: 512/512 [MB] (average 287 MBps) 01:16:50.271 01:16:50.271 05:11:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 01:16:50.271 05:11:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 01:16:50.271 05:11:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 01:16:50.271 05:11:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:16:50.271 [2024-12-09 05:11:32.473404] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:50.271 [2024-12-09 05:11:32.473469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61285 ] 01:16:50.271 { 01:16:50.271 "subsystems": [ 01:16:50.271 { 01:16:50.271 "subsystem": "bdev", 01:16:50.271 "config": [ 01:16:50.271 { 01:16:50.271 "params": { 01:16:50.271 "block_size": 512, 01:16:50.271 "num_blocks": 1048576, 01:16:50.271 "name": "malloc0" 01:16:50.271 }, 01:16:50.271 "method": "bdev_malloc_create" 01:16:50.271 }, 01:16:50.271 { 01:16:50.271 "params": { 01:16:50.271 "filename": "/dev/zram1", 01:16:50.271 "name": "uring0" 01:16:50.271 }, 01:16:50.271 "method": "bdev_uring_create" 01:16:50.271 }, 01:16:50.271 { 01:16:50.271 "method": "bdev_wait_for_examine" 01:16:50.271 } 01:16:50.271 ] 01:16:50.271 } 01:16:50.271 ] 01:16:50.271 } 01:16:50.271 [2024-12-09 05:11:32.641961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:50.271 [2024-12-09 05:11:32.691410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:50.530 [2024-12-09 05:11:32.733147] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:51.466  [2024-12-09T05:11:35.298Z] Copying: 212/512 [MB] (212 MBps) [2024-12-09T05:11:35.558Z] Copying: 412/512 [MB] (200 MBps) [2024-12-09T05:11:36.123Z] Copying: 512/512 [MB] (average 198 MBps) 01:16:53.667 01:16:53.667 05:11:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 01:16:53.667 05:11:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 
5qbw4t7yg0v3xu5geyh2mp2b17xmcurdxhdackye7g2dp1hwos2ra43jkt2md86xqsui8f99n7qrxa1ejysr3eiqruei0jfl76rgcj1uctaajal5e5h2tei202qqs0rcurkommwvsx63hduv338m22moe4hjwck286muu5i27nw8dsqc8fb6zhmh29hd6494chfnzksx6ou8n5l0t9bggbkpdt2ljt91xmzole2drklf0o7kynlvf3r96s41caervfwve4xuyp488lt8ulj96wetm8xt5oz8icc6pwh70hrkds2x7wp2grnatakr9eli88wi8q9me4ncqkothkd7441cgf5z0pqpqk73cvzbk2z9p27cd92pkcmzdlclic59qauv1krmpdt4g1lwmezwysrsfl0ijpgjwwumwuc4a4hkmmvts3ffzsryq45q0n7wn4n8u2o5n78uq5qn6js8sm0b3jkynn5w7jw1aibrwfozekxzn48bmawqevys9ww5ay9c0gdb0tr3evs6wmslan1z2ixe5tb04kic6cswg3puadw9g9lcddtalze9kcqfh3dknzcqts5gr51rctnqobqrjli8yuuq5s64x6heyld39ggp0ycg00375bralrrbertghlgyt2x84khtzus7qx1lj66l7dlumyyzhroljvp2fbnofgdji920zvv7rw1ku6cntkt4lmvfzelhc8hcfychairnrvjgepejszdwi4ojovmmii20mwvxdmr8d78mgb6ovli7m7r66dn5q8yb2o3gsdes2ayni67bchv85t56atgg3ftbub0its3txoine2f8utaxdfyw4ydirdlc89lm90zd0eddav2xu2v5p18ani4xgdsmiemdmka4u0r1luezza7qt3f4ajh8j68o8gb2ueh7c5lr69twrfrzs4d9lvltc6geo2vbkrkdksubc5k1bwc31b0ormkpw97ddef0jbomcz4voofwupcyeclzcfy85jbijwh2az51oxix == \5\q\b\w\4\t\7\y\g\0\v\3\x\u\5\g\e\y\h\2\m\p\2\b\1\7\x\m\c\u\r\d\x\h\d\a\c\k\y\e\7\g\2\d\p\1\h\w\o\s\2\r\a\4\3\j\k\t\2\m\d\8\6\x\q\s\u\i\8\f\9\9\n\7\q\r\x\a\1\e\j\y\s\r\3\e\i\q\r\u\e\i\0\j\f\l\7\6\r\g\c\j\1\u\c\t\a\a\j\a\l\5\e\5\h\2\t\e\i\2\0\2\q\q\s\0\r\c\u\r\k\o\m\m\w\v\s\x\6\3\h\d\u\v\3\3\8\m\2\2\m\o\e\4\h\j\w\c\k\2\8\6\m\u\u\5\i\2\7\n\w\8\d\s\q\c\8\f\b\6\z\h\m\h\2\9\h\d\6\4\9\4\c\h\f\n\z\k\s\x\6\o\u\8\n\5\l\0\t\9\b\g\g\b\k\p\d\t\2\l\j\t\9\1\x\m\z\o\l\e\2\d\r\k\l\f\0\o\7\k\y\n\l\v\f\3\r\9\6\s\4\1\c\a\e\r\v\f\w\v\e\4\x\u\y\p\4\8\8\l\t\8\u\l\j\9\6\w\e\t\m\8\x\t\5\o\z\8\i\c\c\6\p\w\h\7\0\h\r\k\d\s\2\x\7\w\p\2\g\r\n\a\t\a\k\r\9\e\l\i\8\8\w\i\8\q\9\m\e\4\n\c\q\k\o\t\h\k\d\7\4\4\1\c\g\f\5\z\0\p\q\p\q\k\7\3\c\v\z\b\k\2\z\9\p\2\7\c\d\9\2\p\k\c\m\z\d\l\c\l\i\c\5\9\q\a\u\v\1\k\r\m\p\d\t\4\g\1\l\w\m\e\z\w\y\s\r\s\f\l\0\i\j\p\g\j\w\w\u\m\w\u\c\4\a\4\h\k\m\m\v\t\s\3\f\f\z\s\r\y\q\4\5\q\0\n\7\w\n\4\n\8\u\2\o\5\n\7\8\u\q\5\q\n\6\j\s\8\s\m\0\b\3\j\k\y\n\n\5\w\7\j\w\1\a\i\b\r\w\f\o\z\e\k\x\z\n\4\8\b\m\a\w\q\e\v\y\s\9\w\w\5\a\y\9\c\0\g\d\b\0\t\r\3\e\v\s\6\w\m\s\l\a\n\1\z\2\i\x\e\5\t\b\0\4\k\i\c\6\c\s\w\g\3\p\u\a\d\w\9\g\9\l\c\d\d\t\a\l\z\e\9\k\c\q\f\h\3\d\k\n\z\c\q\t\s\5\g\r\5\1\r\c\t\n\q\o\b\q\r\j\l\i\8\y\u\u\q\5\s\6\4\x\6\h\e\y\l\d\3\9\g\g\p\0\y\c\g\0\0\3\7\5\b\r\a\l\r\r\b\e\r\t\g\h\l\g\y\t\2\x\8\4\k\h\t\z\u\s\7\q\x\1\l\j\6\6\l\7\d\l\u\m\y\y\z\h\r\o\l\j\v\p\2\f\b\n\o\f\g\d\j\i\9\2\0\z\v\v\7\r\w\1\k\u\6\c\n\t\k\t\4\l\m\v\f\z\e\l\h\c\8\h\c\f\y\c\h\a\i\r\n\r\v\j\g\e\p\e\j\s\z\d\w\i\4\o\j\o\v\m\m\i\i\2\0\m\w\v\x\d\m\r\8\d\7\8\m\g\b\6\o\v\l\i\7\m\7\r\6\6\d\n\5\q\8\y\b\2\o\3\g\s\d\e\s\2\a\y\n\i\6\7\b\c\h\v\8\5\t\5\6\a\t\g\g\3\f\t\b\u\b\0\i\t\s\3\t\x\o\i\n\e\2\f\8\u\t\a\x\d\f\y\w\4\y\d\i\r\d\l\c\8\9\l\m\9\0\z\d\0\e\d\d\a\v\2\x\u\2\v\5\p\1\8\a\n\i\4\x\g\d\s\m\i\e\m\d\m\k\a\4\u\0\r\1\l\u\e\z\z\a\7\q\t\3\f\4\a\j\h\8\j\6\8\o\8\g\b\2\u\e\h\7\c\5\l\r\6\9\t\w\r\f\r\z\s\4\d\9\l\v\l\t\c\6\g\e\o\2\v\b\k\r\k\d\k\s\u\b\c\5\k\1\b\w\c\3\1\b\0\o\r\m\k\p\w\9\7\d\d\e\f\0\j\b\o\m\c\z\4\v\o\o\f\w\u\p\c\y\e\c\l\z\c\f\y\8\5\j\b\i\j\w\h\2\a\z\5\1\o\x\i\x ]] 01:16:53.667 05:11:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 01:16:53.668 05:11:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 
5qbw4t7yg0v3xu5geyh2mp2b17xmcurdxhdackye7g2dp1hwos2ra43jkt2md86xqsui8f99n7qrxa1ejysr3eiqruei0jfl76rgcj1uctaajal5e5h2tei202qqs0rcurkommwvsx63hduv338m22moe4hjwck286muu5i27nw8dsqc8fb6zhmh29hd6494chfnzksx6ou8n5l0t9bggbkpdt2ljt91xmzole2drklf0o7kynlvf3r96s41caervfwve4xuyp488lt8ulj96wetm8xt5oz8icc6pwh70hrkds2x7wp2grnatakr9eli88wi8q9me4ncqkothkd7441cgf5z0pqpqk73cvzbk2z9p27cd92pkcmzdlclic59qauv1krmpdt4g1lwmezwysrsfl0ijpgjwwumwuc4a4hkmmvts3ffzsryq45q0n7wn4n8u2o5n78uq5qn6js8sm0b3jkynn5w7jw1aibrwfozekxzn48bmawqevys9ww5ay9c0gdb0tr3evs6wmslan1z2ixe5tb04kic6cswg3puadw9g9lcddtalze9kcqfh3dknzcqts5gr51rctnqobqrjli8yuuq5s64x6heyld39ggp0ycg00375bralrrbertghlgyt2x84khtzus7qx1lj66l7dlumyyzhroljvp2fbnofgdji920zvv7rw1ku6cntkt4lmvfzelhc8hcfychairnrvjgepejszdwi4ojovmmii20mwvxdmr8d78mgb6ovli7m7r66dn5q8yb2o3gsdes2ayni67bchv85t56atgg3ftbub0its3txoine2f8utaxdfyw4ydirdlc89lm90zd0eddav2xu2v5p18ani4xgdsmiemdmka4u0r1luezza7qt3f4ajh8j68o8gb2ueh7c5lr69twrfrzs4d9lvltc6geo2vbkrkdksubc5k1bwc31b0ormkpw97ddef0jbomcz4voofwupcyeclzcfy85jbijwh2az51oxix == \5\q\b\w\4\t\7\y\g\0\v\3\x\u\5\g\e\y\h\2\m\p\2\b\1\7\x\m\c\u\r\d\x\h\d\a\c\k\y\e\7\g\2\d\p\1\h\w\o\s\2\r\a\4\3\j\k\t\2\m\d\8\6\x\q\s\u\i\8\f\9\9\n\7\q\r\x\a\1\e\j\y\s\r\3\e\i\q\r\u\e\i\0\j\f\l\7\6\r\g\c\j\1\u\c\t\a\a\j\a\l\5\e\5\h\2\t\e\i\2\0\2\q\q\s\0\r\c\u\r\k\o\m\m\w\v\s\x\6\3\h\d\u\v\3\3\8\m\2\2\m\o\e\4\h\j\w\c\k\2\8\6\m\u\u\5\i\2\7\n\w\8\d\s\q\c\8\f\b\6\z\h\m\h\2\9\h\d\6\4\9\4\c\h\f\n\z\k\s\x\6\o\u\8\n\5\l\0\t\9\b\g\g\b\k\p\d\t\2\l\j\t\9\1\x\m\z\o\l\e\2\d\r\k\l\f\0\o\7\k\y\n\l\v\f\3\r\9\6\s\4\1\c\a\e\r\v\f\w\v\e\4\x\u\y\p\4\8\8\l\t\8\u\l\j\9\6\w\e\t\m\8\x\t\5\o\z\8\i\c\c\6\p\w\h\7\0\h\r\k\d\s\2\x\7\w\p\2\g\r\n\a\t\a\k\r\9\e\l\i\8\8\w\i\8\q\9\m\e\4\n\c\q\k\o\t\h\k\d\7\4\4\1\c\g\f\5\z\0\p\q\p\q\k\7\3\c\v\z\b\k\2\z\9\p\2\7\c\d\9\2\p\k\c\m\z\d\l\c\l\i\c\5\9\q\a\u\v\1\k\r\m\p\d\t\4\g\1\l\w\m\e\z\w\y\s\r\s\f\l\0\i\j\p\g\j\w\w\u\m\w\u\c\4\a\4\h\k\m\m\v\t\s\3\f\f\z\s\r\y\q\4\5\q\0\n\7\w\n\4\n\8\u\2\o\5\n\7\8\u\q\5\q\n\6\j\s\8\s\m\0\b\3\j\k\y\n\n\5\w\7\j\w\1\a\i\b\r\w\f\o\z\e\k\x\z\n\4\8\b\m\a\w\q\e\v\y\s\9\w\w\5\a\y\9\c\0\g\d\b\0\t\r\3\e\v\s\6\w\m\s\l\a\n\1\z\2\i\x\e\5\t\b\0\4\k\i\c\6\c\s\w\g\3\p\u\a\d\w\9\g\9\l\c\d\d\t\a\l\z\e\9\k\c\q\f\h\3\d\k\n\z\c\q\t\s\5\g\r\5\1\r\c\t\n\q\o\b\q\r\j\l\i\8\y\u\u\q\5\s\6\4\x\6\h\e\y\l\d\3\9\g\g\p\0\y\c\g\0\0\3\7\5\b\r\a\l\r\r\b\e\r\t\g\h\l\g\y\t\2\x\8\4\k\h\t\z\u\s\7\q\x\1\l\j\6\6\l\7\d\l\u\m\y\y\z\h\r\o\l\j\v\p\2\f\b\n\o\f\g\d\j\i\9\2\0\z\v\v\7\r\w\1\k\u\6\c\n\t\k\t\4\l\m\v\f\z\e\l\h\c\8\h\c\f\y\c\h\a\i\r\n\r\v\j\g\e\p\e\j\s\z\d\w\i\4\o\j\o\v\m\m\i\i\2\0\m\w\v\x\d\m\r\8\d\7\8\m\g\b\6\o\v\l\i\7\m\7\r\6\6\d\n\5\q\8\y\b\2\o\3\g\s\d\e\s\2\a\y\n\i\6\7\b\c\h\v\8\5\t\5\6\a\t\g\g\3\f\t\b\u\b\0\i\t\s\3\t\x\o\i\n\e\2\f\8\u\t\a\x\d\f\y\w\4\y\d\i\r\d\l\c\8\9\l\m\9\0\z\d\0\e\d\d\a\v\2\x\u\2\v\5\p\1\8\a\n\i\4\x\g\d\s\m\i\e\m\d\m\k\a\4\u\0\r\1\l\u\e\z\z\a\7\q\t\3\f\4\a\j\h\8\j\6\8\o\8\g\b\2\u\e\h\7\c\5\l\r\6\9\t\w\r\f\r\z\s\4\d\9\l\v\l\t\c\6\g\e\o\2\v\b\k\r\k\d\k\s\u\b\c\5\k\1\b\w\c\3\1\b\0\o\r\m\k\p\w\9\7\d\d\e\f\0\j\b\o\m\c\z\4\v\o\o\f\w\u\p\c\y\e\c\l\z\c\f\y\8\5\j\b\i\j\w\h\2\a\z\5\1\o\x\i\x ]] 01:16:53.668 05:11:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 01:16:53.668 05:11:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 01:16:53.668 05:11:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 01:16:53.668 05:11:36 
spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 01:16:53.668 05:11:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:16:53.924 [2024-12-09 05:11:36.146903] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:53.924 [2024-12-09 05:11:36.147027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61358 ] 01:16:53.924 { 01:16:53.924 "subsystems": [ 01:16:53.924 { 01:16:53.924 "subsystem": "bdev", 01:16:53.924 "config": [ 01:16:53.924 { 01:16:53.924 "params": { 01:16:53.924 "block_size": 512, 01:16:53.924 "num_blocks": 1048576, 01:16:53.924 "name": "malloc0" 01:16:53.924 }, 01:16:53.924 "method": "bdev_malloc_create" 01:16:53.924 }, 01:16:53.924 { 01:16:53.924 "params": { 01:16:53.924 "filename": "/dev/zram1", 01:16:53.924 "name": "uring0" 01:16:53.924 }, 01:16:53.924 "method": "bdev_uring_create" 01:16:53.924 }, 01:16:53.924 { 01:16:53.924 "method": "bdev_wait_for_examine" 01:16:53.924 } 01:16:53.924 ] 01:16:53.924 } 01:16:53.924 ] 01:16:53.924 } 01:16:53.924 [2024-12-09 05:11:36.299303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:53.924 [2024-12-09 05:11:36.351603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:54.181 [2024-12-09 05:11:36.393764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:55.114  [2024-12-09T05:11:38.942Z] Copying: 199/512 [MB] (199 MBps) [2024-12-09T05:11:39.200Z] Copying: 399/512 [MB] (199 MBps) [2024-12-09T05:11:39.458Z] Copying: 512/512 [MB] (average 200 MBps) 01:16:57.002 01:16:57.261 05:11:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 01:16:57.261 05:11:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 01:16:57.261 05:11:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 01:16:57.261 05:11:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 01:16:57.261 05:11:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 01:16:57.261 05:11:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 01:16:57.261 05:11:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 01:16:57.261 05:11:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:16:57.261 [2024-12-09 05:11:39.522275] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:16:57.261 [2024-12-09 05:11:39.522436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61404 ] 01:16:57.261 { 01:16:57.261 "subsystems": [ 01:16:57.261 { 01:16:57.261 "subsystem": "bdev", 01:16:57.261 "config": [ 01:16:57.261 { 01:16:57.261 "params": { 01:16:57.261 "block_size": 512, 01:16:57.261 "num_blocks": 1048576, 01:16:57.261 "name": "malloc0" 01:16:57.261 }, 01:16:57.261 "method": "bdev_malloc_create" 01:16:57.261 }, 01:16:57.261 { 01:16:57.261 "params": { 01:16:57.261 "filename": "/dev/zram1", 01:16:57.261 "name": "uring0" 01:16:57.261 }, 01:16:57.261 "method": "bdev_uring_create" 01:16:57.261 }, 01:16:57.261 { 01:16:57.261 "params": { 01:16:57.261 "name": "uring0" 01:16:57.261 }, 01:16:57.261 "method": "bdev_uring_delete" 01:16:57.261 }, 01:16:57.261 { 01:16:57.261 "method": "bdev_wait_for_examine" 01:16:57.261 } 01:16:57.261 ] 01:16:57.261 } 01:16:57.261 ] 01:16:57.261 } 01:16:57.261 [2024-12-09 05:11:39.673360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:57.520 [2024-12-09 05:11:39.725453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:57.520 [2024-12-09 05:11:39.767429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:57.520  [2024-12-09T05:11:40.543Z] Copying: 0/0 [B] (average 0 Bps) 01:16:58.087 01:16:58.087 05:11:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 01:16:58.087 05:11:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 01:16:58.087 05:11:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 01:16:58.087 05:11:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 01:16:58.087 05:11:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:16:58.087 05:11:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 01:16:58.087 05:11:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 01:16:58.087 05:11:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:58.087 05:11:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:58.087 05:11:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:58.087 05:11:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:58.087 05:11:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:58.087 05:11:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:16:58.087 05:11:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:16:58.087 05:11:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:16:58.087 05:11:40 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 01:16:58.087 [2024-12-09 05:11:40.357071] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:16:58.087 [2024-12-09 05:11:40.357156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61428 ] 01:16:58.087 { 01:16:58.087 "subsystems": [ 01:16:58.087 { 01:16:58.087 "subsystem": "bdev", 01:16:58.087 "config": [ 01:16:58.087 { 01:16:58.087 "params": { 01:16:58.087 "block_size": 512, 01:16:58.087 "num_blocks": 1048576, 01:16:58.087 "name": "malloc0" 01:16:58.087 }, 01:16:58.087 "method": "bdev_malloc_create" 01:16:58.087 }, 01:16:58.087 { 01:16:58.087 "params": { 01:16:58.087 "filename": "/dev/zram1", 01:16:58.087 "name": "uring0" 01:16:58.087 }, 01:16:58.087 "method": "bdev_uring_create" 01:16:58.087 }, 01:16:58.087 { 01:16:58.087 "params": { 01:16:58.087 "name": "uring0" 01:16:58.087 }, 01:16:58.087 "method": "bdev_uring_delete" 01:16:58.087 }, 01:16:58.087 { 01:16:58.087 "method": "bdev_wait_for_examine" 01:16:58.087 } 01:16:58.087 ] 01:16:58.087 } 01:16:58.087 ] 01:16:58.087 } 01:16:58.087 [2024-12-09 05:11:40.511623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:58.345 [2024-12-09 05:11:40.568635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:58.345 [2024-12-09 05:11:40.610657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:58.345 [2024-12-09 05:11:40.797834] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 01:16:58.604 [2024-12-09 05:11:40.797978] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 01:16:58.604 [2024-12-09 05:11:40.797991] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 01:16:58.604 [2024-12-09 05:11:40.798000] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:16:58.604 [2024-12-09 05:11:41.052187] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:16:58.863 05:11:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 01:16:58.863 05:11:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:16:58.863 05:11:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 01:16:58.863 05:11:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 01:16:58.863 05:11:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 01:16:58.863 05:11:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:16:58.863 05:11:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 01:16:58.863 05:11:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 01:16:58.863 05:11:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 01:16:58.863 05:11:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 01:16:58.863 05:11:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 01:16:58.863 05:11:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 01:16:59.121 01:16:59.121 real 0m12.884s 01:16:59.121 user 0m8.823s 01:16:59.121 sys 0m10.827s 01:16:59.121 ************************************ 01:16:59.121 END TEST dd_uring_copy 01:16:59.121 ************************************ 01:16:59.121 05:11:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:59.121 05:11:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 01:16:59.121 ************************************ 01:16:59.121 END TEST spdk_dd_uring 01:16:59.121 ************************************ 01:16:59.121 01:16:59.121 real 0m13.180s 01:16:59.121 user 0m8.972s 01:16:59.121 sys 0m10.982s 01:16:59.121 05:11:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 01:16:59.121 05:11:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 01:16:59.121 05:11:41 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 01:16:59.121 05:11:41 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:59.121 05:11:41 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:59.121 05:11:41 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:16:59.121 ************************************ 01:16:59.121 START TEST spdk_dd_sparse 01:16:59.121 ************************************ 01:16:59.121 05:11:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 01:16:59.380 * Looking for test storage... 01:16:59.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:16:59.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:59.380 --rc genhtml_branch_coverage=1 01:16:59.380 --rc genhtml_function_coverage=1 01:16:59.380 --rc genhtml_legend=1 01:16:59.380 --rc geninfo_all_blocks=1 01:16:59.380 --rc geninfo_unexecuted_blocks=1 01:16:59.380 01:16:59.380 ' 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:16:59.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:59.380 --rc genhtml_branch_coverage=1 01:16:59.380 --rc genhtml_function_coverage=1 01:16:59.380 --rc genhtml_legend=1 01:16:59.380 --rc geninfo_all_blocks=1 01:16:59.380 --rc geninfo_unexecuted_blocks=1 01:16:59.380 01:16:59.380 ' 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:16:59.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:59.380 --rc genhtml_branch_coverage=1 01:16:59.380 --rc genhtml_function_coverage=1 01:16:59.380 --rc genhtml_legend=1 01:16:59.380 --rc geninfo_all_blocks=1 01:16:59.380 --rc geninfo_unexecuted_blocks=1 01:16:59.380 01:16:59.380 ' 01:16:59.380 05:11:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:16:59.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:16:59.380 --rc genhtml_branch_coverage=1 01:16:59.380 --rc genhtml_function_coverage=1 01:16:59.380 --rc genhtml_legend=1 01:16:59.380 --rc geninfo_all_blocks=1 01:16:59.381 --rc geninfo_unexecuted_blocks=1 01:16:59.381 01:16:59.381 ' 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:16:59.381 05:11:41 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 01:16:59.381 1+0 records in 01:16:59.381 1+0 records out 01:16:59.381 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.0111122 s, 377 MB/s 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 01:16:59.381 1+0 records in 01:16:59.381 1+0 records out 01:16:59.381 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0119283 s, 352 MB/s 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 01:16:59.381 1+0 records in 01:16:59.381 1+0 records out 01:16:59.381 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0103445 s, 405 MB/s 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 01:16:59.381 ************************************ 01:16:59.381 START TEST dd_sparse_file_to_file 01:16:59.381 ************************************ 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 01:16:59.381 05:11:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 01:16:59.639 [2024-12-09 05:11:41.871575] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
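For reference, the prepare step traced above builds the sparse source file with plain coreutils, and the stat checks later in the test compare apparent size against allocated blocks. A minimal standalone sketch of the same idea follows; the file name sparse_src is illustrative (the harness itself uses file_zero1):

  # Three 4 MiB data extents at offsets 0, 16 MiB and 32 MiB; the gaps stay holes.
  dd if=/dev/zero of=sparse_src bs=4M count=1
  dd if=/dev/zero of=sparse_src bs=4M count=1 seek=4
  dd if=/dev/zero of=sparse_src bs=4M count=1 seek=8
  # Apparent size is 37748736 bytes (36 MiB); only 24576 512-byte blocks (12 MiB) are allocated.
  stat --printf='%s bytes apparent, %b blocks allocated\n' sparse_src

The dd_sparse_file_to_file test then copies this file through spdk_dd with --sparse and expects both numbers to match on the destination, i.e. the holes are carried over rather than written out as zeroes.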
01:16:59.639 [2024-12-09 05:11:41.871699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61529 ] 01:16:59.639 { 01:16:59.639 "subsystems": [ 01:16:59.639 { 01:16:59.639 "subsystem": "bdev", 01:16:59.639 "config": [ 01:16:59.639 { 01:16:59.639 "params": { 01:16:59.639 "block_size": 4096, 01:16:59.639 "filename": "dd_sparse_aio_disk", 01:16:59.639 "name": "dd_aio" 01:16:59.639 }, 01:16:59.639 "method": "bdev_aio_create" 01:16:59.639 }, 01:16:59.639 { 01:16:59.639 "params": { 01:16:59.639 "lvs_name": "dd_lvstore", 01:16:59.639 "bdev_name": "dd_aio" 01:16:59.639 }, 01:16:59.639 "method": "bdev_lvol_create_lvstore" 01:16:59.639 }, 01:16:59.639 { 01:16:59.639 "method": "bdev_wait_for_examine" 01:16:59.639 } 01:16:59.639 ] 01:16:59.639 } 01:16:59.639 ] 01:16:59.639 } 01:16:59.639 [2024-12-09 05:11:42.026103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:16:59.639 [2024-12-09 05:11:42.078158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:16:59.898 [2024-12-09 05:11:42.120112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:16:59.898  [2024-12-09T05:11:42.615Z] Copying: 12/36 [MB] (average 750 MBps) 01:17:00.159 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 01:17:00.159 01:17:00.159 real 0m0.696s 01:17:00.159 user 0m0.442s 01:17:00.159 sys 0m0.348s 01:17:00.159 ************************************ 01:17:00.159 END TEST dd_sparse_file_to_file 01:17:00.159 ************************************ 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 01:17:00.159 ************************************ 01:17:00.159 START TEST dd_sparse_file_to_bdev 
01:17:00.159 ************************************ 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 01:17:00.159 05:11:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:17:00.417 [2024-12-09 05:11:42.633637] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:17:00.417 [2024-12-09 05:11:42.633773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61577 ] 01:17:00.417 { 01:17:00.417 "subsystems": [ 01:17:00.417 { 01:17:00.417 "subsystem": "bdev", 01:17:00.417 "config": [ 01:17:00.417 { 01:17:00.417 "params": { 01:17:00.417 "block_size": 4096, 01:17:00.417 "filename": "dd_sparse_aio_disk", 01:17:00.417 "name": "dd_aio" 01:17:00.417 }, 01:17:00.417 "method": "bdev_aio_create" 01:17:00.417 }, 01:17:00.417 { 01:17:00.417 "params": { 01:17:00.417 "lvs_name": "dd_lvstore", 01:17:00.417 "lvol_name": "dd_lvol", 01:17:00.417 "size_in_mib": 36, 01:17:00.417 "thin_provision": true 01:17:00.417 }, 01:17:00.417 "method": "bdev_lvol_create" 01:17:00.417 }, 01:17:00.417 { 01:17:00.417 "method": "bdev_wait_for_examine" 01:17:00.417 } 01:17:00.417 ] 01:17:00.417 } 01:17:00.417 ] 01:17:00.417 } 01:17:00.417 [2024-12-09 05:11:42.785924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:00.417 [2024-12-09 05:11:42.838128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:00.676 [2024-12-09 05:11:42.879921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:17:00.676  [2024-12-09T05:11:43.391Z] Copying: 12/36 [MB] (average 461 MBps) 01:17:00.935 01:17:00.935 01:17:00.935 real 0m0.608s 01:17:00.935 user 0m0.403s 01:17:00.935 sys 0m0.301s 01:17:00.935 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:00.935 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 01:17:00.935 ************************************ 01:17:00.935 END TEST dd_sparse_file_to_bdev 01:17:00.935 ************************************ 01:17:00.935 05:11:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 01:17:00.936 05:11:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:00.936 05:11:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:00.936 05:11:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 01:17:00.936 ************************************ 01:17:00.936 START TEST dd_sparse_bdev_to_file 01:17:00.936 ************************************ 01:17:00.936 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 01:17:00.936 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 01:17:00.936 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 01:17:00.936 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 01:17:00.936 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 01:17:00.936 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 01:17:00.936 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 01:17:00.936 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 01:17:00.936 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 01:17:00.936 [2024-12-09 05:11:43.308267] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:17:00.936 [2024-12-09 05:11:43.308349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61605 ] 01:17:00.936 { 01:17:00.936 "subsystems": [ 01:17:00.936 { 01:17:00.936 "subsystem": "bdev", 01:17:00.936 "config": [ 01:17:00.936 { 01:17:00.936 "params": { 01:17:00.936 "block_size": 4096, 01:17:00.936 "filename": "dd_sparse_aio_disk", 01:17:00.936 "name": "dd_aio" 01:17:00.936 }, 01:17:00.936 "method": "bdev_aio_create" 01:17:00.936 }, 01:17:00.936 { 01:17:00.936 "method": "bdev_wait_for_examine" 01:17:00.936 } 01:17:00.936 ] 01:17:00.936 } 01:17:00.936 ] 01:17:00.936 } 01:17:01.195 [2024-12-09 05:11:43.460539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:01.196 [2024-12-09 05:11:43.512806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:01.196 [2024-12-09 05:11:43.557957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:17:01.196  [2024-12-09T05:11:43.911Z] Copying: 12/36 [MB] (average 750 MBps) 01:17:01.455 01:17:01.455 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 01:17:01.455 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 01:17:01.455 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 01:17:01.455 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 01:17:01.455 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 01:17:01.455 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 01:17:01.455 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 01:17:01.455 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 01:17:01.455 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 01:17:01.455 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 01:17:01.455 01:17:01.455 real 0m0.642s 01:17:01.455 user 0m0.416s 01:17:01.455 sys 0m0.316s 01:17:01.455 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:01.455 05:11:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 01:17:01.455 ************************************ 01:17:01.455 END TEST dd_sparse_bdev_to_file 01:17:01.455 ************************************ 01:17:01.715 05:11:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 01:17:01.715 05:11:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 01:17:01.715 05:11:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 01:17:01.715 05:11:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 01:17:01.715 05:11:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 01:17:01.715 ************************************ 01:17:01.715 END TEST spdk_dd_sparse 01:17:01.715 ************************************ 01:17:01.715 01:17:01.715 real 0m2.472s 01:17:01.715 user 0m1.470s 01:17:01.715 sys 0m1.300s 01:17:01.715 05:11:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:01.715 05:11:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 01:17:01.715 05:11:44 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 01:17:01.715 05:11:44 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:01.715 05:11:44 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:01.715 05:11:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:17:01.715 ************************************ 01:17:01.715 START TEST spdk_dd_negative 01:17:01.715 ************************************ 01:17:01.715 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 01:17:01.715 * Looking for test storage... 
01:17:01.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 01:17:01.715 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:17:01.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:01.976 --rc genhtml_branch_coverage=1 01:17:01.976 --rc genhtml_function_coverage=1 01:17:01.976 --rc genhtml_legend=1 01:17:01.976 --rc geninfo_all_blocks=1 01:17:01.976 --rc geninfo_unexecuted_blocks=1 01:17:01.976 01:17:01.976 ' 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:17:01.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:01.976 --rc genhtml_branch_coverage=1 01:17:01.976 --rc genhtml_function_coverage=1 01:17:01.976 --rc genhtml_legend=1 01:17:01.976 --rc geninfo_all_blocks=1 01:17:01.976 --rc geninfo_unexecuted_blocks=1 01:17:01.976 01:17:01.976 ' 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:17:01.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:01.976 --rc genhtml_branch_coverage=1 01:17:01.976 --rc genhtml_function_coverage=1 01:17:01.976 --rc genhtml_legend=1 01:17:01.976 --rc geninfo_all_blocks=1 01:17:01.976 --rc geninfo_unexecuted_blocks=1 01:17:01.976 01:17:01.976 ' 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:17:01.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:01.976 --rc genhtml_branch_coverage=1 01:17:01.976 --rc genhtml_function_coverage=1 01:17:01.976 --rc genhtml_legend=1 01:17:01.976 --rc geninfo_all_blocks=1 01:17:01.976 --rc geninfo_unexecuted_blocks=1 01:17:01.976 01:17:01.976 ' 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 01:17:01.976 05:11:44 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:01.977 ************************************ 01:17:01.977 START TEST 
dd_invalid_arguments 01:17:01.977 ************************************ 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 01:17:01.977 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 01:17:01.977 01:17:01.977 CPU options: 01:17:01.977 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 01:17:01.977 (like [0,1,10]) 01:17:01.977 --lcores lcore to CPU mapping list. The list is in the format: 01:17:01.977 [<,lcores[@CPUs]>...] 01:17:01.977 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 01:17:01.977 Within the group, '-' is used for range separator, 01:17:01.977 ',' is used for single number separator. 01:17:01.977 '( )' can be omitted for single element group, 01:17:01.977 '@' can be omitted if cpus and lcores have the same value 01:17:01.977 --disable-cpumask-locks Disable CPU core lock files. 01:17:01.977 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 01:17:01.977 pollers in the app support interrupt mode) 01:17:01.977 -p, --main-core main (primary) core for DPDK 01:17:01.977 01:17:01.977 Configuration options: 01:17:01.977 -c, --config, --json JSON config file 01:17:01.977 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 01:17:01.977 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
01:17:01.977 --wait-for-rpc wait for RPCs to initialize subsystems 01:17:01.977 --rpcs-allowed comma-separated list of permitted RPCS 01:17:01.977 --json-ignore-init-errors don't exit on invalid config entry 01:17:01.977 01:17:01.977 Memory options: 01:17:01.977 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 01:17:01.977 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 01:17:01.977 --huge-dir use a specific hugetlbfs mount to reserve memory from 01:17:01.977 -R, --huge-unlink unlink huge files after initialization 01:17:01.977 -n, --mem-channels number of memory channels used for DPDK 01:17:01.977 -s, --mem-size memory size in MB for DPDK (default: 0MB) 01:17:01.977 --msg-mempool-size global message memory pool size in count (default: 262143) 01:17:01.977 --no-huge run without using hugepages 01:17:01.977 --enforce-numa enforce NUMA allocations from the specified NUMA node 01:17:01.977 -i, --shm-id shared memory ID (optional) 01:17:01.977 -g, --single-file-segments force creating just one hugetlbfs file 01:17:01.977 01:17:01.977 PCI options: 01:17:01.977 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 01:17:01.977 -B, --pci-blocked pci addr to block (can be used more than once) 01:17:01.977 -u, --no-pci disable PCI access 01:17:01.977 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 01:17:01.977 01:17:01.977 Log options: 01:17:01.977 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 01:17:01.977 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 01:17:01.977 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 01:17:01.977 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 01:17:01.977 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 01:17:01.977 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 01:17:01.977 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 01:17:01.977 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 01:17:01.977 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 01:17:01.977 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 01:17:01.977 virtio_pci, virtio_user, virtio_vfio_user, vmd) 01:17:01.977 --silence-noticelog disable notice level logging to stderr 01:17:01.977 01:17:01.977 Trace options: 01:17:01.977 --num-trace-entries number of trace entries for each core, must be power of 2, 01:17:01.977 setting 0 to disable trace (default 32768) 01:17:01.977 Tracepoints vary in size and can use more than one trace entry. 01:17:01.977 -e, --tpoint-group [:] 01:17:01.977 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 01:17:01.977 [2024-12-09 05:11:44.366506] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 01:17:01.977 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 01:17:01.977 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 01:17:01.977 bdev_raid, scheduler, all). 01:17:01.977 tpoint_mask - tracepoint mask for enabling individual tpoints inside 01:17:01.977 a tracepoint group. First tpoint inside a group can be enabled by 01:17:01.977 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 01:17:01.977 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 01:17:01.977 in /include/spdk_internal/trace_defs.h 01:17:01.977 01:17:01.977 Other options: 01:17:01.977 -h, --help show this usage 01:17:01.977 -v, --version print SPDK version 01:17:01.977 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 01:17:01.977 --env-context Opaque context for use of the env implementation 01:17:01.977 01:17:01.977 Application specific: 01:17:01.977 [--------- DD Options ---------] 01:17:01.977 --if Input file. Must specify either --if or --ib. 01:17:01.977 --ib Input bdev. Must specifier either --if or --ib 01:17:01.977 --of Output file. Must specify either --of or --ob. 01:17:01.977 --ob Output bdev. Must specify either --of or --ob. 01:17:01.977 --iflag Input file flags. 01:17:01.977 --oflag Output file flags. 01:17:01.977 --bs I/O unit size (default: 4096) 01:17:01.977 --qd Queue depth (default: 2) 01:17:01.977 --count I/O unit count. The number of I/O units to copy. (default: all) 01:17:01.977 --skip Skip this many I/O units at start of input. (default: 0) 01:17:01.977 --seek Skip this many I/O units at start of output. (default: 0) 01:17:01.977 --aio Force usage of AIO. (by default io_uring is used if available) 01:17:01.977 --sparse Enable hole skipping in input target 01:17:01.977 Available iflag and oflag values: 01:17:01.977 append - append mode 01:17:01.977 direct - use direct I/O for data 01:17:01.977 directory - fail unless a directory 01:17:01.977 dsync - use synchronized I/O for data 01:17:01.977 noatime - do not update access time 01:17:01.977 noctty - do not assign controlling terminal from file 01:17:01.977 nofollow - do not follow symlinks 01:17:01.977 nonblock - use non-blocking I/O 01:17:01.977 sync - use synchronized I/O for data and metadata 01:17:01.977 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 01:17:01.978 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:01.978 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:17:01.978 ************************************ 01:17:01.978 END TEST dd_invalid_arguments 01:17:01.978 ************************************ 01:17:01.978 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:01.978 01:17:01.978 real 0m0.074s 01:17:01.978 user 0m0.039s 01:17:01.978 sys 0m0.033s 01:17:01.978 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:01.978 05:11:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:02.237 ************************************ 01:17:02.237 START TEST dd_double_input 01:17:02.237 ************************************ 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 01:17:02.237 [2024-12-09 05:11:44.502911] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
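Each negative test above follows the same pattern: spdk_dd is run through the harness's NOT wrapper with an invalid argument combination, and a non-zero exit is required. A rough standalone equivalent of the double-input case just logged is sketched below; the capture file dd_err.log and the grep are illustrative additions (the harness asserts on the exit status):

  # spdk_dd must refuse a command line that names both a file input (--if) and a bdev input (--ib).
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= >dd_err.log 2>&1; then
      echo "unexpected success"
      exit 1
  fi
  grep -q 'either --if or --ib, but not both' dd_err.log && echo "rejected as expected"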
01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:17:02.237 ************************************ 01:17:02.237 END TEST dd_double_input 01:17:02.237 ************************************ 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:02.237 01:17:02.237 real 0m0.074s 01:17:02.237 user 0m0.037s 01:17:02.237 sys 0m0.035s 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:02.237 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:02.237 ************************************ 01:17:02.237 START TEST dd_double_output 01:17:02.238 ************************************ 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 01:17:02.238 [2024-12-09 05:11:44.641498] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 01:17:02.238 ************************************ 01:17:02.238 END TEST dd_double_output 01:17:02.238 ************************************ 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:02.238 01:17:02.238 real 0m0.076s 01:17:02.238 user 0m0.042s 01:17:02.238 sys 0m0.033s 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:02.238 05:11:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 01:17:02.496 05:11:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:02.497 ************************************ 01:17:02.497 START TEST dd_no_input 01:17:02.497 ************************************ 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 01:17:02.497 [2024-12-09 05:11:44.775486] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:02.497 01:17:02.497 real 0m0.072s 01:17:02.497 user 0m0.042s 01:17:02.497 sys 0m0.028s 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 01:17:02.497 ************************************ 01:17:02.497 END TEST dd_no_input 01:17:02.497 ************************************ 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:02.497 ************************************ 01:17:02.497 START TEST dd_no_output 01:17:02.497 ************************************ 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 01:17:02.497 [2024-12-09 05:11:44.916988] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 01:17:02.497 05:11:44 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:02.497 01:17:02.497 real 0m0.076s 01:17:02.497 user 0m0.041s 01:17:02.497 sys 0m0.033s 01:17:02.497 ************************************ 01:17:02.497 END TEST dd_no_output 01:17:02.497 ************************************ 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:02.497 05:11:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 01:17:02.764 05:11:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 01:17:02.764 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:02.764 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:02.764 05:11:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:02.764 ************************************ 01:17:02.764 START TEST dd_wrong_blocksize 01:17:02.764 ************************************ 01:17:02.764 05:11:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 01:17:02.764 05:11:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 01:17:02.764 05:11:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 01:17:02.764 05:11:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 01:17:02.764 05:11:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.764 05:11:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.764 05:11:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.764 05:11:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.764 05:11:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.764 05:11:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.764 05:11:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.764 05:11:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:02.764 05:11:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 01:17:02.764 [2024-12-09 05:11:45.058057] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 01:17:02.764 05:11:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 01:17:02.764 05:11:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:02.764 05:11:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:17:02.764 ************************************ 01:17:02.764 END TEST dd_wrong_blocksize 01:17:02.764 ************************************ 01:17:02.764 05:11:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:02.764 01:17:02.764 real 0m0.076s 01:17:02.764 user 0m0.051s 01:17:02.764 sys 0m0.024s 01:17:02.764 05:11:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:02.764 05:11:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 01:17:02.764 05:11:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 01:17:02.764 05:11:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:02.764 05:11:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:02.764 05:11:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:02.764 ************************************ 01:17:02.764 START TEST dd_smaller_blocksize 01:17:02.764 ************************************ 01:17:02.764 05:11:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 01:17:02.764 05:11:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 01:17:02.765 05:11:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 01:17:02.765 05:11:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 01:17:02.765 05:11:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.765 05:11:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.765 05:11:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.765 05:11:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.765 05:11:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.765 05:11:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:02.765 05:11:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:02.765 
05:11:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:02.765 05:11:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 01:17:02.765 [2024-12-09 05:11:45.212974] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:17:02.765 [2024-12-09 05:11:45.213049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61836 ] 01:17:03.026 [2024-12-09 05:11:45.361990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:03.026 [2024-12-09 05:11:45.415388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:03.026 [2024-12-09 05:11:45.456477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:17:03.282 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 01:17:03.539 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 01:17:03.539 [2024-12-09 05:11:45.933883] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 01:17:03.539 [2024-12-09 05:11:45.934007] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:17:03.796 [2024-12-09 05:11:46.030109] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 01:17:03.796 ************************************ 01:17:03.796 END TEST dd_smaller_blocksize 01:17:03.796 ************************************ 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:03.796 01:17:03.796 real 0m0.994s 01:17:03.796 user 0m0.375s 01:17:03.796 sys 0m0.510s 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:03.796 ************************************ 01:17:03.796 START TEST dd_invalid_count 01:17:03.796 ************************************ 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
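The dd_smaller_blocksize run that just ended above exercises the other half of the --bs validation: unlike --bs=0, the value 99999999999999 parses but makes the copy fail during memory allocation ("Cannot allocate memory - try smaller block size value"), which the harness accepts as the expected failure. A quick way to reproduce it outside the harness, reusing the invocation from the log:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
      --bs=99999999999999
  echo "exit status: $?"   # expected to be non-zero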
01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:03.796 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:03.797 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:03.797 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:03.797 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:03.797 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 01:17:04.054 [2024-12-09 05:11:46.251555] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:04.054 01:17:04.054 real 0m0.070s 01:17:04.054 user 0m0.043s 01:17:04.054 sys 0m0.025s 01:17:04.054 ************************************ 01:17:04.054 END TEST dd_invalid_count 01:17:04.054 ************************************ 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:04.054 ************************************ 
01:17:04.054 START TEST dd_invalid_oflag 01:17:04.054 ************************************ 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 01:17:04.054 [2024-12-09 05:11:46.383556] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:04.054 01:17:04.054 real 0m0.072s 01:17:04.054 user 0m0.042s 01:17:04.054 sys 0m0.028s 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 01:17:04.054 ************************************ 01:17:04.054 END TEST dd_invalid_oflag 01:17:04.054 ************************************ 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:04.054 ************************************ 01:17:04.054 START TEST dd_invalid_iflag 01:17:04.054 
************************************ 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:04.054 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 01:17:04.311 [2024-12-09 05:11:46.521307] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 01:17:04.312 ************************************ 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:04.312 01:17:04.312 real 0m0.077s 01:17:04.312 user 0m0.047s 01:17:04.312 sys 0m0.028s 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 01:17:04.312 END TEST dd_invalid_iflag 01:17:04.312 ************************************ 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:04.312 ************************************ 01:17:04.312 START TEST dd_unknown_flag 01:17:04.312 ************************************ 01:17:04.312 
05:11:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:04.312 05:11:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 01:17:04.312 [2024-12-09 05:11:46.667298] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:17:04.312 [2024-12-09 05:11:46.667405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61934 ] 01:17:04.569 [2024-12-09 05:11:46.821566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:04.569 [2024-12-09 05:11:46.875732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:04.569 [2024-12-09 05:11:46.917583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:17:04.569 [2024-12-09 05:11:46.948091] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 01:17:04.569 [2024-12-09 05:11:46.948213] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:17:04.569 [2024-12-09 05:11:46.948286] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 01:17:04.569 [2024-12-09 05:11:46.948314] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:17:04.569 [2024-12-09 05:11:46.948544] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 01:17:04.569 [2024-12-09 05:11:46.948594] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:17:04.569 [2024-12-09 05:11:46.948685] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 01:17:04.569 [2024-12-09 05:11:46.948716] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 01:17:04.826 [2024-12-09 05:11:47.043675] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 01:17:04.826 ************************************ 01:17:04.826 END TEST dd_unknown_flag 01:17:04.826 ************************************ 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:04.826 01:17:04.826 real 0m0.538s 01:17:04.826 user 0m0.315s 01:17:04.826 sys 0m0.127s 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:04.826 ************************************ 01:17:04.826 START TEST dd_invalid_json 01:17:04.826 ************************************ 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:04.826 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 01:17:04.826 [2024-12-09 05:11:47.271535] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:17:04.826 [2024-12-09 05:11:47.271677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61962 ] 01:17:05.083 [2024-12-09 05:11:47.417384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:05.083 [2024-12-09 05:11:47.465067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:05.083 [2024-12-09 05:11:47.465127] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 01:17:05.083 [2024-12-09 05:11:47.465137] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 01:17:05.083 [2024-12-09 05:11:47.465143] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:17:05.083 [2024-12-09 05:11:47.465172] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:17:05.341 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 01:17:05.341 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:05.341 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 01:17:05.341 ************************************ 01:17:05.341 END TEST dd_invalid_json 01:17:05.341 ************************************ 01:17:05.341 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 01:17:05.341 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 01:17:05.341 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:05.341 01:17:05.341 real 0m0.351s 01:17:05.341 user 0m0.187s 01:17:05.341 sys 0m0.062s 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:05.342 ************************************ 01:17:05.342 START TEST dd_invalid_seek 01:17:05.342 ************************************ 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 01:17:05.342 
05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:05.342 05:11:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 01:17:05.342 [2024-12-09 05:11:47.691308] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:17:05.342 [2024-12-09 05:11:47.691441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61992 ] 01:17:05.342 { 01:17:05.342 "subsystems": [ 01:17:05.342 { 01:17:05.342 "subsystem": "bdev", 01:17:05.342 "config": [ 01:17:05.342 { 01:17:05.342 "params": { 01:17:05.342 "block_size": 512, 01:17:05.342 "num_blocks": 512, 01:17:05.342 "name": "malloc0" 01:17:05.342 }, 01:17:05.342 "method": "bdev_malloc_create" 01:17:05.342 }, 01:17:05.342 { 01:17:05.342 "params": { 01:17:05.342 "block_size": 512, 01:17:05.342 "num_blocks": 512, 01:17:05.342 "name": "malloc1" 01:17:05.342 }, 01:17:05.342 "method": "bdev_malloc_create" 01:17:05.342 }, 01:17:05.342 { 01:17:05.342 "method": "bdev_wait_for_examine" 01:17:05.342 } 01:17:05.342 ] 01:17:05.342 } 01:17:05.342 ] 01:17:05.342 } 01:17:05.598 [2024-12-09 05:11:47.843682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:05.598 [2024-12-09 05:11:47.895031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:05.598 [2024-12-09 05:11:47.937293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:17:05.598 [2024-12-09 05:11:47.994312] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 01:17:05.598 [2024-12-09 05:11:47.994380] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:17:05.856 [2024-12-09 05:11:48.092925] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:05.856 01:17:05.856 real 0m0.569s 01:17:05.856 user 0m0.376s 01:17:05.856 sys 0m0.159s 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 01:17:05.856 ************************************ 01:17:05.856 END TEST dd_invalid_seek 01:17:05.856 ************************************ 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:05.856 ************************************ 01:17:05.856 START TEST dd_invalid_skip 01:17:05.856 ************************************ 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:05.856 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 01:17:06.113 { 01:17:06.113 "subsystems": [ 01:17:06.113 { 01:17:06.113 "subsystem": "bdev", 01:17:06.113 "config": [ 01:17:06.113 { 01:17:06.113 "params": { 01:17:06.113 "block_size": 512, 01:17:06.113 "num_blocks": 512, 01:17:06.113 "name": "malloc0" 01:17:06.113 }, 01:17:06.113 "method": "bdev_malloc_create" 01:17:06.113 }, 01:17:06.114 { 01:17:06.114 "params": { 01:17:06.114 "block_size": 512, 01:17:06.114 "num_blocks": 512, 01:17:06.114 "name": "malloc1" 
01:17:06.114 }, 01:17:06.114 "method": "bdev_malloc_create" 01:17:06.114 }, 01:17:06.114 { 01:17:06.114 "method": "bdev_wait_for_examine" 01:17:06.114 } 01:17:06.114 ] 01:17:06.114 } 01:17:06.114 ] 01:17:06.114 } 01:17:06.114 [2024-12-09 05:11:48.331280] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:17:06.114 [2024-12-09 05:11:48.331426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62025 ] 01:17:06.114 [2024-12-09 05:11:48.482547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:06.114 [2024-12-09 05:11:48.536832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:06.376 [2024-12-09 05:11:48.578427] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:17:06.376 [2024-12-09 05:11:48.634896] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 01:17:06.376 [2024-12-09 05:11:48.634952] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:17:06.376 [2024-12-09 05:11:48.733690] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 01:17:06.639 ************************************ 01:17:06.639 END TEST dd_invalid_skip 01:17:06.639 ************************************ 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:06.639 01:17:06.639 real 0m0.578s 01:17:06.639 user 0m0.401s 01:17:06.639 sys 0m0.144s 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:06.639 ************************************ 01:17:06.639 START TEST dd_invalid_input_count 01:17:06.639 ************************************ 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 01:17:06.639 05:11:48 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 01:17:06.639 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 01:17:06.640 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 01:17:06.640 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 01:17:06.640 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:06.640 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:06.640 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:06.640 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:06.640 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:06.640 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:06.640 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:06.640 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:06.640 05:11:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 01:17:06.640 [2024-12-09 05:11:48.974062] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:17:06.640 [2024-12-09 05:11:48.974215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62063 ] 01:17:06.640 { 01:17:06.640 "subsystems": [ 01:17:06.640 { 01:17:06.640 "subsystem": "bdev", 01:17:06.640 "config": [ 01:17:06.640 { 01:17:06.640 "params": { 01:17:06.640 "block_size": 512, 01:17:06.640 "num_blocks": 512, 01:17:06.640 "name": "malloc0" 01:17:06.640 }, 01:17:06.640 "method": "bdev_malloc_create" 01:17:06.640 }, 01:17:06.640 { 01:17:06.640 "params": { 01:17:06.640 "block_size": 512, 01:17:06.640 "num_blocks": 512, 01:17:06.640 "name": "malloc1" 01:17:06.640 }, 01:17:06.640 "method": "bdev_malloc_create" 01:17:06.640 }, 01:17:06.640 { 01:17:06.640 "method": "bdev_wait_for_examine" 01:17:06.640 } 01:17:06.640 ] 01:17:06.640 } 01:17:06.640 ] 01:17:06.640 } 01:17:06.905 [2024-12-09 05:11:49.127801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:06.905 [2024-12-09 05:11:49.180231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:06.905 [2024-12-09 05:11:49.223987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:17:06.905 [2024-12-09 05:11:49.281194] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 01:17:06.905 [2024-12-09 05:11:49.281377] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:17:07.165 [2024-12-09 05:11:49.381116] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:07.165 01:17:07.165 real 0m0.576s 01:17:07.165 user 0m0.390s 01:17:07.165 sys 0m0.153s 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:07.165 ************************************ 01:17:07.165 END TEST dd_invalid_input_count 01:17:07.165 ************************************ 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:07.165 ************************************ 01:17:07.165 START TEST dd_invalid_output_count 01:17:07.165 ************************************ 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:07.165 05:11:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 01:17:07.165 { 01:17:07.165 "subsystems": [ 01:17:07.165 { 01:17:07.165 "subsystem": "bdev", 01:17:07.165 "config": [ 01:17:07.165 { 01:17:07.165 "params": { 01:17:07.165 "block_size": 512, 01:17:07.165 "num_blocks": 512, 01:17:07.165 "name": "malloc0" 01:17:07.165 }, 01:17:07.165 "method": "bdev_malloc_create" 01:17:07.165 }, 01:17:07.165 { 01:17:07.165 "method": "bdev_wait_for_examine" 01:17:07.165 } 01:17:07.165 ] 01:17:07.165 } 01:17:07.165 ] 01:17:07.165 } 01:17:07.165 [2024-12-09 05:11:49.609275] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 
initialization... 01:17:07.165 [2024-12-09 05:11:49.609349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62098 ] 01:17:07.425 [2024-12-09 05:11:49.762041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:07.425 [2024-12-09 05:11:49.810546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:07.425 [2024-12-09 05:11:49.856063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:17:07.685 [2024-12-09 05:11:49.904959] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 01:17:07.685 [2024-12-09 05:11:49.905033] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:17:07.685 [2024-12-09 05:11:50.003554] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:17:07.685 05:11:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 01:17:07.685 05:11:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:07.685 05:11:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 01:17:07.685 05:11:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 01:17:07.685 05:11:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 01:17:07.685 ************************************ 01:17:07.685 END TEST dd_invalid_output_count 01:17:07.685 ************************************ 01:17:07.685 05:11:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:07.685 01:17:07.685 real 0m0.556s 01:17:07.685 user 0m0.367s 01:17:07.685 sys 0m0.145s 01:17:07.685 05:11:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:07.685 05:11:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:07.944 ************************************ 01:17:07.944 START TEST dd_bs_not_multiple 01:17:07.944 ************************************ 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 01:17:07.944 05:11:50 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 01:17:07.944 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 01:17:07.945 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 01:17:07.945 { 01:17:07.945 "subsystems": [ 01:17:07.945 { 01:17:07.945 "subsystem": "bdev", 01:17:07.945 "config": [ 01:17:07.945 { 01:17:07.945 "params": { 01:17:07.945 "block_size": 512, 01:17:07.945 "num_blocks": 512, 01:17:07.945 "name": "malloc0" 01:17:07.945 }, 01:17:07.945 "method": "bdev_malloc_create" 01:17:07.945 }, 01:17:07.945 { 01:17:07.945 "params": { 01:17:07.945 "block_size": 512, 01:17:07.945 "num_blocks": 512, 01:17:07.945 "name": "malloc1" 01:17:07.945 }, 01:17:07.945 "method": "bdev_malloc_create" 01:17:07.945 }, 01:17:07.945 { 01:17:07.945 "method": "bdev_wait_for_examine" 01:17:07.945 } 01:17:07.945 ] 01:17:07.945 } 01:17:07.945 ] 01:17:07.945 } 01:17:07.945 [2024-12-09 05:11:50.244286] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:17:07.945 [2024-12-09 05:11:50.244366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62129 ] 01:17:07.945 [2024-12-09 05:11:50.396961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:08.204 [2024-12-09 05:11:50.449769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:08.204 [2024-12-09 05:11:50.493763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:17:08.204 [2024-12-09 05:11:50.551116] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 01:17:08.204 [2024-12-09 05:11:50.551181] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:17:08.204 [2024-12-09 05:11:50.650964] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 01:17:08.463 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 01:17:08.463 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:08.463 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 01:17:08.463 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 01:17:08.463 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 01:17:08.463 ************************************ 01:17:08.463 END TEST dd_bs_not_multiple 01:17:08.463 ************************************ 01:17:08.463 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:08.463 01:17:08.463 real 0m0.574s 01:17:08.463 user 0m0.390s 01:17:08.463 sys 0m0.147s 01:17:08.463 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:08.463 05:11:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 01:17:08.463 01:17:08.463 real 0m6.760s 01:17:08.463 user 0m3.650s 01:17:08.463 sys 0m2.643s 01:17:08.463 05:11:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:08.463 05:11:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 01:17:08.463 ************************************ 01:17:08.463 END TEST spdk_dd_negative 01:17:08.463 ************************************ 01:17:08.463 01:17:08.463 real 1m13.330s 01:17:08.463 user 0m46.882s 01:17:08.463 sys 0m30.885s 01:17:08.463 05:11:50 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:08.463 05:11:50 spdk_dd -- common/autotest_common.sh@10 -- # set +x 01:17:08.463 ************************************ 01:17:08.463 END TEST spdk_dd 01:17:08.463 ************************************ 01:17:08.721 05:11:50 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 01:17:08.721 05:11:50 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 01:17:08.721 05:11:50 -- spdk/autotest.sh@260 -- # timing_exit lib 01:17:08.721 05:11:50 -- common/autotest_common.sh@732 -- # xtrace_disable 01:17:08.721 05:11:50 -- common/autotest_common.sh@10 -- # set +x 01:17:08.721 05:11:50 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 01:17:08.721 05:11:50 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 01:17:08.721 05:11:50 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 01:17:08.721 05:11:50 -- spdk/autotest.sh@277 -- 
# export NET_TYPE 01:17:08.721 05:11:50 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 01:17:08.721 05:11:50 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 01:17:08.721 05:11:50 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 01:17:08.721 05:11:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:17:08.721 05:11:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:08.721 05:11:50 -- common/autotest_common.sh@10 -- # set +x 01:17:08.721 ************************************ 01:17:08.721 START TEST nvmf_tcp 01:17:08.721 ************************************ 01:17:08.721 05:11:50 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 01:17:08.722 * Looking for test storage... 01:17:08.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:17:08.722 05:11:51 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:17:08.722 05:11:51 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 01:17:08.722 05:11:51 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:17:08.981 05:11:51 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@345 -- # : 1 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:17:08.981 05:11:51 nvmf_tcp -- scripts/common.sh@368 -- # return 0 01:17:08.981 05:11:51 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:17:08.981 05:11:51 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:17:08.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:08.981 --rc genhtml_branch_coverage=1 01:17:08.981 --rc genhtml_function_coverage=1 01:17:08.981 --rc genhtml_legend=1 01:17:08.981 --rc geninfo_all_blocks=1 01:17:08.981 --rc geninfo_unexecuted_blocks=1 01:17:08.981 01:17:08.981 ' 01:17:08.981 05:11:51 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:17:08.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:08.981 --rc genhtml_branch_coverage=1 01:17:08.981 --rc genhtml_function_coverage=1 01:17:08.981 --rc genhtml_legend=1 01:17:08.981 --rc geninfo_all_blocks=1 01:17:08.981 --rc geninfo_unexecuted_blocks=1 01:17:08.981 01:17:08.981 ' 01:17:08.981 05:11:51 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:17:08.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:08.981 --rc genhtml_branch_coverage=1 01:17:08.981 --rc genhtml_function_coverage=1 01:17:08.981 --rc genhtml_legend=1 01:17:08.981 --rc geninfo_all_blocks=1 01:17:08.981 --rc geninfo_unexecuted_blocks=1 01:17:08.981 01:17:08.981 ' 01:17:08.981 05:11:51 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:17:08.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:08.981 --rc genhtml_branch_coverage=1 01:17:08.981 --rc genhtml_function_coverage=1 01:17:08.981 --rc genhtml_legend=1 01:17:08.981 --rc geninfo_all_blocks=1 01:17:08.981 --rc geninfo_unexecuted_blocks=1 01:17:08.981 01:17:08.981 ' 01:17:08.981 05:11:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 01:17:08.981 05:11:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 01:17:08.981 05:11:51 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 01:17:08.981 05:11:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:17:08.981 05:11:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:08.981 05:11:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:17:08.981 ************************************ 01:17:08.981 START TEST nvmf_target_core 01:17:08.981 ************************************ 01:17:08.981 05:11:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 01:17:08.981 * Looking for test storage... 01:17:08.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:17:08.981 05:11:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:17:08.981 05:11:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 01:17:08.981 05:11:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:17:08.981 05:11:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:17:09.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:09.242 --rc genhtml_branch_coverage=1 01:17:09.242 --rc genhtml_function_coverage=1 01:17:09.242 --rc genhtml_legend=1 01:17:09.242 --rc geninfo_all_blocks=1 01:17:09.242 --rc geninfo_unexecuted_blocks=1 01:17:09.242 01:17:09.242 ' 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:17:09.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:09.242 --rc genhtml_branch_coverage=1 01:17:09.242 --rc genhtml_function_coverage=1 01:17:09.242 --rc genhtml_legend=1 01:17:09.242 --rc geninfo_all_blocks=1 01:17:09.242 --rc geninfo_unexecuted_blocks=1 01:17:09.242 01:17:09.242 ' 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:17:09.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:09.242 --rc genhtml_branch_coverage=1 01:17:09.242 --rc genhtml_function_coverage=1 01:17:09.242 --rc genhtml_legend=1 01:17:09.242 --rc geninfo_all_blocks=1 01:17:09.242 --rc geninfo_unexecuted_blocks=1 01:17:09.242 01:17:09.242 ' 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:17:09.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:09.242 --rc genhtml_branch_coverage=1 01:17:09.242 --rc genhtml_function_coverage=1 01:17:09.242 --rc genhtml_legend=1 01:17:09.242 --rc geninfo_all_blocks=1 01:17:09.242 --rc geninfo_unexecuted_blocks=1 01:17:09.242 01:17:09.242 ' 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:09.242 05:11:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:17:09.243 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:17:09.243 ************************************ 01:17:09.243 START TEST nvmf_host_management 01:17:09.243 ************************************ 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 01:17:09.243 * Looking for test storage... 
01:17:09.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 01:17:09.243 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:17:09.503 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:17:09.503 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:17:09.503 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:17:09.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:09.504 --rc genhtml_branch_coverage=1 01:17:09.504 --rc genhtml_function_coverage=1 01:17:09.504 --rc genhtml_legend=1 01:17:09.504 --rc geninfo_all_blocks=1 01:17:09.504 --rc geninfo_unexecuted_blocks=1 01:17:09.504 01:17:09.504 ' 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:17:09.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:09.504 --rc genhtml_branch_coverage=1 01:17:09.504 --rc genhtml_function_coverage=1 01:17:09.504 --rc genhtml_legend=1 01:17:09.504 --rc geninfo_all_blocks=1 01:17:09.504 --rc geninfo_unexecuted_blocks=1 01:17:09.504 01:17:09.504 ' 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:17:09.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:09.504 --rc genhtml_branch_coverage=1 01:17:09.504 --rc genhtml_function_coverage=1 01:17:09.504 --rc genhtml_legend=1 01:17:09.504 --rc geninfo_all_blocks=1 01:17:09.504 --rc geninfo_unexecuted_blocks=1 01:17:09.504 01:17:09.504 ' 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:17:09.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:09.504 --rc genhtml_branch_coverage=1 01:17:09.504 --rc genhtml_function_coverage=1 01:17:09.504 --rc genhtml_legend=1 01:17:09.504 --rc geninfo_all_blocks=1 01:17:09.504 --rc geninfo_unexecuted_blocks=1 01:17:09.504 01:17:09.504 ' 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
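The xtrace above walks through the lt/cmp_versions helper from scripts/common.sh several times to decide whether the installed lcov (1.15 here) predates version 2, which is what selects the legacy --rc lcov_branch_coverage / --rc lcov_function_coverage options seen in LCOV_OPTS. A minimal standalone sketch of that comparison, simplified from the traced helper (the function name version_lt and the purely numeric field handling are illustrative, not the exact SPDK implementation):

# Split each version on '.', '-' or ':' and compare numerically, field by field.
# Returns 0 (true) when $1 is strictly older than $2, as with 1.15 vs 2 above.
version_lt() {
    local -a v1 v2
    local i f1 f2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        f1=${v1[i]:-0} f2=${v2[i]:-0}    # missing trailing fields count as 0
        ((f1 < f2)) && return 0
        ((f1 > f2)) && return 1
    done
    return 1    # equal versions are not "less than"
}

# Mirrors the decision recorded in the trace: lcov 1.15 is older than 2,
# so the legacy branch/function coverage options get appended.
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi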
01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:17:09.504 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 01:17:09.504 05:11:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:17:09.504 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:17:09.505 Cannot find device "nvmf_init_br" 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:17:09.505 Cannot find device "nvmf_init_br2" 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:17:09.505 Cannot find device "nvmf_tgt_br" 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:17:09.505 Cannot find device "nvmf_tgt_br2" 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:17:09.505 Cannot find device "nvmf_init_br" 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:17:09.505 Cannot find device "nvmf_init_br2" 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:17:09.505 Cannot find device "nvmf_tgt_br" 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:17:09.505 Cannot find device "nvmf_tgt_br2" 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:17:09.505 Cannot find device "nvmf_br" 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 01:17:09.505 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:17:09.764 Cannot find device "nvmf_init_if" 01:17:09.764 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 01:17:09.764 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:17:09.764 Cannot find device "nvmf_init_if2" 01:17:09.764 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 01:17:09.764 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:09.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:09.764 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 01:17:09.764 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:09.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:09.764 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 01:17:09.764 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:17:09.764 05:11:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:17:09.764 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:17:09.764 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:17:09.764 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:17:09.764 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:17:09.764 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:17:09.764 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:17:09.764 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:17:09.765 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:17:10.024 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:17:10.024 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 01:17:10.024 01:17:10.024 --- 10.0.0.3 ping statistics --- 01:17:10.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:10.024 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:17:10.024 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:17:10.024 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.113 ms 01:17:10.024 01:17:10.024 --- 10.0.0.4 ping statistics --- 01:17:10.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:10.024 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:17:10.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:17:10.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 01:17:10.024 01:17:10.024 --- 10.0.0.1 ping statistics --- 01:17:10.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:10.024 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:17:10.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:17:10.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 01:17:10.024 01:17:10.024 --- 10.0.0.2 ping statistics --- 01:17:10.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:10.024 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62477 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62477 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62477 ']' 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:10.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:10.024 05:11:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:10.024 [2024-12-09 05:11:52.405895] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:17:10.024 [2024-12-09 05:11:52.405970] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:17:10.284 [2024-12-09 05:11:52.568678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:17:10.284 [2024-12-09 05:11:52.655576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:17:10.284 [2024-12-09 05:11:52.655635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:17:10.284 [2024-12-09 05:11:52.655644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:17:10.284 [2024-12-09 05:11:52.655650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:17:10.284 [2024-12-09 05:11:52.655656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:17:10.284 [2024-12-09 05:11:52.657181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:17:10.284 [2024-12-09 05:11:52.657341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:17:10.284 [2024-12-09 05:11:52.657452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:17:10.284 [2024-12-09 05:11:52.657453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:17:10.543 [2024-12-09 05:11:52.740158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:11.111 [2024-12-09 05:11:53.428526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
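At this point the target application (nvmf_tgt, pid 62477) is running inside the nvmf_tgt_ns_spdk namespace and the TCP transport has been created with nvmf_create_transport -t tcp -o -u 8192; the entries that follow build the rpcs.txt batch that creates the Malloc0 bdev, the nqn.2016-06.io.spdk:cnode0 subsystem and the 10.0.0.3:4420 listener. As a rough sketch only, an equivalent sequence issued one RPC at a time would look approximately like the following (the rpc.py path, the default RPC socket, and the serial/host handling are assumptions; the actual rpcs.txt generated by host_management.sh may differ in detail):

# Illustrative approximation of the subsystem setup the test performs;
# addresses, NQNs and sizes are taken from the surrounding log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192                        # as traced above
$RPC bdev_malloc_create -b Malloc0 64 512                           # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # later removed by the test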
01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:11.111 Malloc0 01:17:11.111 [2024-12-09 05:11:53.529109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 01:17:11.111 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62537 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62537 /var/tmp/bdevperf.sock 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62537 ']' 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 01:17:11.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:17:11.371 { 01:17:11.371 "params": { 01:17:11.371 "name": "Nvme$subsystem", 01:17:11.371 "trtype": "$TEST_TRANSPORT", 01:17:11.371 "traddr": "$NVMF_FIRST_TARGET_IP", 01:17:11.371 "adrfam": "ipv4", 01:17:11.371 "trsvcid": "$NVMF_PORT", 01:17:11.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:17:11.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:17:11.371 "hdgst": ${hdgst:-false}, 01:17:11.371 "ddgst": ${ddgst:-false} 01:17:11.371 }, 01:17:11.371 "method": "bdev_nvme_attach_controller" 01:17:11.371 } 01:17:11.371 EOF 01:17:11.371 )") 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 01:17:11.371 05:11:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:17:11.371 "params": { 01:17:11.371 "name": "Nvme0", 01:17:11.371 "trtype": "tcp", 01:17:11.371 "traddr": "10.0.0.3", 01:17:11.371 "adrfam": "ipv4", 01:17:11.371 "trsvcid": "4420", 01:17:11.371 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:17:11.371 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:17:11.371 "hdgst": false, 01:17:11.371 "ddgst": false 01:17:11.371 }, 01:17:11.371 "method": "bdev_nvme_attach_controller" 01:17:11.371 }' 01:17:11.371 [2024-12-09 05:11:53.649676] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:17:11.372 [2024-12-09 05:11:53.649806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62537 ] 01:17:11.372 [2024-12-09 05:11:53.802304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:11.631 [2024-12-09 05:11:53.855205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:11.631 [2024-12-09 05:11:53.905857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:17:11.631 Running I/O for 10 seconds... 
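The bdevperf launch traced above never writes a config file: gen_nvmf_target_json prints the bdev_nvme_attach_controller entry shown in the printf, and the script hands it to bdevperf through process substitution, which is why the command line reads --json /dev/fd/63. A stripped-down sketch of the same pattern follows; the outer "subsystems"/"config" wrapper is assumed here (the helper in the log emits only the entry shown), while the attach parameters and the bdevperf flags are taken verbatim from the trace.

# Generate the Nvme0 attach-controller config on the fly and feed it to
# bdevperf via process substitution instead of a file on disk.
gen_nvme0_json() {
    cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# 64-deep queue, 64 KiB verify workload for 10 seconds, as in the trace.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json <(gen_nvme0_json) \
    -q 64 -o 65536 -w verify -t 10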
01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:12.201 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:12.201 [2024-12-09 
05:11:54.620662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.620863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.620957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.621027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.621095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.621170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.621238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.621311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.621401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.621480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.621543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.621604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.621672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.621721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.621787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.621836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.621902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.621951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.622015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.622076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.622133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.622190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to 
be set 01:17:12.201 [2024-12-09 05:11:54.622247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.622303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.622377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.622436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.622493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.622549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.622605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.622660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.201 [2024-12-09 05:11:54.622715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.622771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.622826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.622884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.622938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.622974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.622982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.622989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.622995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24e50 is same with the state(6) to be set 01:17:12.202 [2024-12-09 05:11:54.623296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623356] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.623990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.623998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.624004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.624013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.624019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.624027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2688 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.624033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.624042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.624049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.202 [2024-12-09 05:11:54.624057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.202 [2024-12-09 05:11:54.624065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.624327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.624391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 
[2024-12-09 05:11:54.624431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:17:12.203 [2024-12-09 05:11:54.624479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:17:12.203 [2024-12-09 05:11:54.624527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:17:12.203 [2024-12-09 05:11:54.624583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:17:12.203 [2024-12-09 05:11:54.624642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:17:12.203 [2024-12-09 05:11:54.624703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:17:12.203 [2024-12-09 05:11:54.624755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:17:12.203 [2024-12-09 05:11:54.624820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:17:12.203 [2024-12-09 05:11:54.624874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:17:12.203 [2024-12-09 05:11:54.624931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:17:12.203 [2024-12-09 05:11:54.625016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:17:12.203 [2024-12-09 05:11:54.625066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:17:12.203 task offset: 0 on job bdev=Nvme0n1 fails
01:17:12.203
01:17:12.203 Latency(us)
01:17:12.203 [2024-12-09T05:11:54.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:17:12.203 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
01:17:12.203 Job: Nvme0n1 ended in about 0.61 seconds with error
01:17:12.203 Verification LBA range: start 0x0 length 0x400
01:17:12.203 Nvme0n1 : 0.61 1692.43 105.78 105.78 0.00 34847.16 6496.36 29992.02
01:17:12.203 [2024-12-09T05:11:54.659Z] ===================================================================================================================
01:17:12.203 [2024-12-09T05:11:54.659Z] Total : 1692.43 105.78 105.78 0.00 34847.16 6496.36 29992.02
01:17:12.203 [2024-12-09 05:11:54.625136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:17:12.203 [2024-12-09 05:11:54.625147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:17:12.203 [2024-12-09 05:11:54.625153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:17:12.203 [2024-12-09 05:11:54.625171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 01:17:12.203 [2024-12-09 05:11:54.625180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.625189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.625195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.625203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.625209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.625217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.625223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.625231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.625237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.625245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:12.203 [2024-12-09 05:11:54.625251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.625263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.625269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.625278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.625286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.625294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.625300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.625308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.625314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.203 [2024-12-09 05:11:54.625338] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.203 [2024-12-09 05:11:54.625346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.204 [2024-12-09 05:11:54.625355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.204 [2024-12-09 05:11:54.625361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.204 [2024-12-09 05:11:54.625369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.204 [2024-12-09 05:11:54.625375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.204 [2024-12-09 05:11:54.625383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.204 [2024-12-09 05:11:54.625391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.204 [2024-12-09 05:11:54.625399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:17:12.204 [2024-12-09 05:11:54.625405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.204 [2024-12-09 05:11:54.625412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11632d0 is same with the state(6) to be set 01:17:12.204 [2024-12-09 05:11:54.625553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:17:12.204 [2024-12-09 05:11:54.625565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.204 [2024-12-09 05:11:54.625573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:17:12.204 [2024-12-09 05:11:54.625579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.204 [2024-12-09 05:11:54.625586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:17:12.204 [2024-12-09 05:11:54.625592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.204 [2024-12-09 05:11:54.625599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:17:12.204 [2024-12-09 05:11:54.625605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:17:12.204 [2024-12-09 05:11:54.625611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1168ce0 is same with the state(6) to be set 01:17:12.204 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 01:17:12.204 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:17:12.204 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:12.204 [2024-12-09 05:11:54.626697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:17:12.204 [2024-12-09 05:11:54.628925] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 01:17:12.204 [2024-12-09 05:11:54.628974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1168ce0 (9): Bad file descriptor 01:17:12.204 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:17:12.204 05:11:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 01:17:12.204 [2024-12-09 05:11:54.639848] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 01:17:13.601 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62537 01:17:13.601 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62537) - No such process 01:17:13.601 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 01:17:13.601 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 01:17:13.601 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 01:17:13.601 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 01:17:13.601 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 01:17:13.602 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 01:17:13.602 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:17:13.602 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:17:13.602 { 01:17:13.602 "params": { 01:17:13.602 "name": "Nvme$subsystem", 01:17:13.602 "trtype": "$TEST_TRANSPORT", 01:17:13.602 "traddr": "$NVMF_FIRST_TARGET_IP", 01:17:13.602 "adrfam": "ipv4", 01:17:13.602 "trsvcid": "$NVMF_PORT", 01:17:13.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:17:13.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:17:13.602 "hdgst": ${hdgst:-false}, 01:17:13.602 "ddgst": ${ddgst:-false} 01:17:13.602 }, 01:17:13.602 "method": "bdev_nvme_attach_controller" 01:17:13.602 } 01:17:13.602 EOF 01:17:13.602 )") 01:17:13.602 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 01:17:13.602 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
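The second pass, whose xtrace begins above, re-launches bdevperf against the same target with the regenerated JSON supplied over a process-substitution file descriptor. Run by hand it would look roughly like this (paths and flags are the ones printed in the trace; gen_nvmf_target_json is assumed to be available from test/nvmf/common.sh):

# -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: read-and-verify
# workload, -t 1: run for one second ("Running I/O for 1 seconds..." below).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1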
01:17:13.602 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 01:17:13.602 05:11:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:17:13.602 "params": { 01:17:13.602 "name": "Nvme0", 01:17:13.602 "trtype": "tcp", 01:17:13.602 "traddr": "10.0.0.3", 01:17:13.602 "adrfam": "ipv4", 01:17:13.602 "trsvcid": "4420", 01:17:13.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:17:13.602 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:17:13.602 "hdgst": false, 01:17:13.602 "ddgst": false 01:17:13.602 }, 01:17:13.602 "method": "bdev_nvme_attach_controller" 01:17:13.602 }' 01:17:13.602 [2024-12-09 05:11:55.702729] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:17:13.602 [2024-12-09 05:11:55.702852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62575 ] 01:17:13.602 [2024-12-09 05:11:55.853903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:13.602 [2024-12-09 05:11:55.903190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:13.602 [2024-12-09 05:11:55.953672] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:17:13.859 Running I/O for 1 seconds... 01:17:14.794 1728.00 IOPS, 108.00 MiB/s 01:17:14.794 Latency(us) 01:17:14.794 [2024-12-09T05:11:57.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:14.794 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 01:17:14.794 Verification LBA range: start 0x0 length 0x400 01:17:14.794 Nvme0n1 : 1.01 1776.04 111.00 0.00 0.00 35419.23 3834.86 32968.33 01:17:14.794 [2024-12-09T05:11:57.250Z] =================================================================================================================== 01:17:14.794 [2024-12-09T05:11:57.250Z] Total : 1776.04 111.00 0.00 0.00 35419.23 3834.86 32968.33 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:17:15.054 rmmod nvme_tcp 01:17:15.054 rmmod nvme_fabrics 
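In the two Latency(us) summaries above, the MiB/s column is just the IOPS figure scaled by the 64 KiB I/O size (65536 B = 1/16 MiB): the interim line of 1728.00 IOPS works out to 1728.00 / 16 = 108.00 MiB/s exactly, the final second-pass row gives 1776.04 / 16 ≈ 111.00 MiB/s, and the failed first pass gives 1692.43 / 16 ≈ 105.78 MiB/s.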
01:17:15.054 rmmod nvme_keyring 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62477 ']' 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62477 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62477 ']' 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62477 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:17:15.054 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62477 01:17:15.314 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:17:15.314 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:17:15.314 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62477' 01:17:15.314 killing process with pid 62477 01:17:15.314 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62477 01:17:15.314 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62477 01:17:15.573 [2024-12-09 05:11:57.831479] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 
-- # ip link set nvmf_tgt_br2 nomaster 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:17:15.573 05:11:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:17:15.573 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:17:15.832 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:17:15.832 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:15.832 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:15.832 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 01:17:15.832 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:15.833 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:17:15.833 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:15.833 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 01:17:15.833 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 01:17:15.833 01:17:15.833 real 0m6.641s 01:17:15.833 user 0m23.420s 01:17:15.833 sys 0m1.707s 01:17:15.833 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:15.833 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:17:15.833 ************************************ 01:17:15.833 END TEST nvmf_host_management 01:17:15.833 ************************************ 01:17:15.833 05:11:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 01:17:15.833 05:11:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:17:15.833 05:11:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:15.833 05:11:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:17:15.833 ************************************ 01:17:15.833 START TEST nvmf_lvol 01:17:15.833 ************************************ 01:17:15.833 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 01:17:16.092 * Looking for test storage... 
01:17:16.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:17:16.092 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:17:16.092 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 01:17:16.092 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:17:16.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:16.093 --rc genhtml_branch_coverage=1 01:17:16.093 --rc genhtml_function_coverage=1 01:17:16.093 --rc genhtml_legend=1 01:17:16.093 --rc geninfo_all_blocks=1 01:17:16.093 --rc geninfo_unexecuted_blocks=1 01:17:16.093 01:17:16.093 ' 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:17:16.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:16.093 --rc genhtml_branch_coverage=1 01:17:16.093 --rc genhtml_function_coverage=1 01:17:16.093 --rc genhtml_legend=1 01:17:16.093 --rc geninfo_all_blocks=1 01:17:16.093 --rc geninfo_unexecuted_blocks=1 01:17:16.093 01:17:16.093 ' 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:17:16.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:16.093 --rc genhtml_branch_coverage=1 01:17:16.093 --rc genhtml_function_coverage=1 01:17:16.093 --rc genhtml_legend=1 01:17:16.093 --rc geninfo_all_blocks=1 01:17:16.093 --rc geninfo_unexecuted_blocks=1 01:17:16.093 01:17:16.093 ' 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:17:16.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:16.093 --rc genhtml_branch_coverage=1 01:17:16.093 --rc genhtml_function_coverage=1 01:17:16.093 --rc genhtml_legend=1 01:17:16.093 --rc geninfo_all_blocks=1 01:17:16.093 --rc geninfo_unexecuted_blocks=1 01:17:16.093 01:17:16.093 ' 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:17:16.093 05:11:58 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:17:16.093 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 01:17:16.093 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 01:17:16.094 
05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
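Note: the nvmftestinit call above runs with NET_TYPE=virt, so the test network is built from veth pairs rather than physical NICs: the initiator addresses (10.0.0.1, 10.0.0.2) stay in the root namespace, the target addresses (10.0.0.3, 10.0.0.4) live inside the nvmf_tgt_ns_spdk namespace, and everything is joined through the nvmf_br bridge. The sketch below is a minimal, hedged reconstruction of that topology for a single initiator/target pair only; it is not the nvmf/common.sh implementation, and the names and addresses are simply copied from the trace.

# minimal sketch of the veth/namespace topology the test builds (one pair only)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                        # bridge the two veth peers
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ping -c 1 10.0.0.3                                             # host -> namespaced target, as in the trace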
01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:17:16.094 Cannot find device "nvmf_init_br" 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:17:16.094 Cannot find device "nvmf_init_br2" 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 01:17:16.094 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:17:16.353 Cannot find device "nvmf_tgt_br" 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:17:16.353 Cannot find device "nvmf_tgt_br2" 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:17:16.353 Cannot find device "nvmf_init_br" 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:17:16.353 Cannot find device "nvmf_init_br2" 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:17:16.353 Cannot find device "nvmf_tgt_br" 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:17:16.353 Cannot find device "nvmf_tgt_br2" 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:17:16.353 Cannot find device "nvmf_br" 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:17:16.353 Cannot find device "nvmf_init_if" 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:17:16.353 Cannot find device "nvmf_init_if2" 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:16.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:16.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:17:16.353 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:17:16.354 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:17:16.354 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:17:16.354 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:17:16.613 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:17:16.613 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 01:17:16.613 01:17:16.613 --- 10.0.0.3 ping statistics --- 01:17:16.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:16.613 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:17:16.613 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:17:16.613 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 01:17:16.613 01:17:16.613 --- 10.0.0.4 ping statistics --- 01:17:16.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:16.613 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 01:17:16.613 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:17:16.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:17:16.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 01:17:16.614 01:17:16.614 --- 10.0.0.1 ping statistics --- 01:17:16.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:16.614 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 01:17:16.614 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:17:16.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:17:16.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 01:17:16.614 01:17:16.614 --- 10.0.0.2 ping statistics --- 01:17:16.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:16.614 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 01:17:16.614 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:17:16.614 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 01:17:16.614 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:17:16.614 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:17:16.614 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:17:16.614 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:17:16.614 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:17:16.614 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:17:16.614 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:17:16.614 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 01:17:16.614 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:17:16.614 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 01:17:16.614 05:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:17:16.614 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 01:17:16.614 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62844 01:17:16.614 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62844 01:17:16.614 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62844 ']' 01:17:16.614 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:16.614 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:16.614 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:16.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:17:16.614 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:16.614 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:17:16.614 [2024-12-09 05:11:59.057872] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
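At this point the target application is launched inside the namespace (nvmfappstart -m 0x7 expands to ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x7, pid 62844) and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. Below is a hedged sketch of that start-and-wait pattern; the polling loop is illustrative only and is not the waitforlisten helper from autotest_common.sh.

# start nvmf_tgt in the target namespace, then wait for its RPC socket
spdk=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
until "$spdk/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5        # keep polling until the app listens on /var/tmp/spdk.sock
done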
01:17:16.614 [2024-12-09 05:11:59.057943] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:17:16.874 [2024-12-09 05:11:59.190675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:17:16.874 [2024-12-09 05:11:59.248246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:17:16.874 [2024-12-09 05:11:59.248418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:17:16.874 [2024-12-09 05:11:59.248464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:17:16.874 [2024-12-09 05:11:59.248497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:17:16.874 [2024-12-09 05:11:59.248519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:17:16.874 [2024-12-09 05:11:59.249520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:17:16.874 [2024-12-09 05:11:59.249722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:16.874 [2024-12-09 05:11:59.249723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:17:16.874 [2024-12-09 05:11:59.293571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:17:17.812 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:17.812 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 01:17:17.812 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:17:17.812 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 01:17:17.812 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:17:17.812 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:17:17.812 05:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:17:17.812 [2024-12-09 05:12:00.230334] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:17:17.812 05:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:17:18.071 05:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 01:17:18.071 05:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:17:18.331 05:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 01:17:18.331 05:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 01:17:18.591 05:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 01:17:18.851 05:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b7d7abe9-017a-4ee5-b044-180e165635c2 01:17:18.851 05:12:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b7d7abe9-017a-4ee5-b044-180e165635c2 lvol 20 01:17:19.110 05:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=651de7bf-eb16-4b28-b8d9-07f4f840dd6d 01:17:19.110 05:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:17:19.369 05:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 651de7bf-eb16-4b28-b8d9-07f4f840dd6d 01:17:19.629 05:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:17:19.629 [2024-12-09 05:12:02.019448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:17:19.629 05:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:17:19.888 05:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 01:17:19.888 05:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62914 01:17:19.888 05:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 01:17:20.825 05:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 651de7bf-eb16-4b28-b8d9-07f4f840dd6d MY_SNAPSHOT 01:17:21.084 05:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f5ad302e-8b96-408c-b669-9f5786c50332 01:17:21.084 05:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 651de7bf-eb16-4b28-b8d9-07f4f840dd6d 30 01:17:21.343 05:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone f5ad302e-8b96-408c-b669-9f5786c50332 MY_CLONE 01:17:21.625 05:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=068b00c7-6b3d-4d92-8123-620e91ac37dd 01:17:21.625 05:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 068b00c7-6b3d-4d92-8123-620e91ac37dd 01:17:22.192 05:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62914 01:17:30.305 Initializing NVMe Controllers 01:17:30.305 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 01:17:30.305 Controller IO queue size 128, less than required. 01:17:30.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:17:30.305 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 01:17:30.305 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 01:17:30.305 Initialization complete. Launching workers. 
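The nvmf_lvol sequence traced above boils down to: build a raid0 out of two malloc bdevs, put a logical volume store on it, carve out an lvol of size 20, export it over NVMe/TCP, and then snapshot/resize/clone/inflate it while spdk_nvme_perf drives random writes against 10.0.0.3:4420. The recap below is a hedged condensation of those rpc.py calls, with the UUIDs from the log replaced by shell variables; it mirrors the commands visible in the trace rather than the nvmf_lvol.sh source.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                                   # Malloc0
$rpc bdev_malloc_create 64 512                                   # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # returns the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # initial size 20, per the trace
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)              # taken while perf I/O is running
$rpc bdev_lvol_resize "$lvol" 30                                 # grow the live lvol to the final size
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                                  # detach the clone from its snapshot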
01:17:30.305 ======================================================== 01:17:30.305 Latency(us) 01:17:30.305 Device Information : IOPS MiB/s Average min max 01:17:30.305 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10008.50 39.10 12798.43 2252.34 71282.52 01:17:30.305 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10240.10 40.00 12506.06 285.04 64685.14 01:17:30.305 ======================================================== 01:17:30.305 Total : 20248.59 79.10 12650.57 285.04 71282.52 01:17:30.305 01:17:30.305 05:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:17:30.566 05:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 651de7bf-eb16-4b28-b8d9-07f4f840dd6d 01:17:30.825 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b7d7abe9-017a-4ee5-b044-180e165635c2 01:17:30.825 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:17:31.085 rmmod nvme_tcp 01:17:31.085 rmmod nvme_fabrics 01:17:31.085 rmmod nvme_keyring 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62844 ']' 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62844 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62844 ']' 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62844 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62844 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 62844' 01:17:31.085 killing process with pid 62844 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62844 01:17:31.085 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62844 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:17:31.345 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:17:31.605 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:17:31.605 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:17:31.605 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:17:31.605 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:31.605 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:31.605 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 01:17:31.605 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:31.605 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:17:31.605 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:31.605 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 01:17:31.605 01:17:31.605 real 0m15.754s 01:17:31.605 user 1m4.183s 01:17:31.605 sys 0m3.800s 01:17:31.605 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 01:17:31.605 05:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:17:31.605 ************************************ 01:17:31.605 END TEST nvmf_lvol 01:17:31.605 ************************************ 01:17:31.605 05:12:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 01:17:31.605 05:12:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:17:31.605 05:12:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:31.605 05:12:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:17:31.605 ************************************ 01:17:31.605 START TEST nvmf_lvs_grow 01:17:31.605 ************************************ 01:17:31.605 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 01:17:31.866 * Looking for test storage... 01:17:31.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:17:31.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:31.866 --rc genhtml_branch_coverage=1 01:17:31.866 --rc genhtml_function_coverage=1 01:17:31.866 --rc genhtml_legend=1 01:17:31.866 --rc geninfo_all_blocks=1 01:17:31.866 --rc geninfo_unexecuted_blocks=1 01:17:31.866 01:17:31.866 ' 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:17:31.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:31.866 --rc genhtml_branch_coverage=1 01:17:31.866 --rc genhtml_function_coverage=1 01:17:31.866 --rc genhtml_legend=1 01:17:31.866 --rc geninfo_all_blocks=1 01:17:31.866 --rc geninfo_unexecuted_blocks=1 01:17:31.866 01:17:31.866 ' 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:17:31.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:31.866 --rc genhtml_branch_coverage=1 01:17:31.866 --rc genhtml_function_coverage=1 01:17:31.866 --rc genhtml_legend=1 01:17:31.866 --rc geninfo_all_blocks=1 01:17:31.866 --rc geninfo_unexecuted_blocks=1 01:17:31.866 01:17:31.866 ' 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:17:31.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:17:31.866 --rc genhtml_branch_coverage=1 01:17:31.866 --rc genhtml_function_coverage=1 01:17:31.866 --rc genhtml_legend=1 01:17:31.866 --rc geninfo_all_blocks=1 01:17:31.866 --rc geninfo_unexecuted_blocks=1 01:17:31.866 01:17:31.866 ' 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 01:17:31.866 05:12:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:17:31.866 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:17:31.867 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
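Note that both runs log "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected": build_nvmf_app_args evaluates '[' '' -eq 1 ']', i.e. an unset variable reaches a numeric test. The warning is harmless here, but the generic guard below would avoid it; the variable and argument names are placeholders, not the ones used at common.sh line 33.

# hedged sketch: default an unset flag before comparing it numerically
if [ "${SOME_OPTIONAL_FLAG:-0}" -eq 1 ]; then   # SOME_OPTIONAL_FLAG is a hypothetical name
    NVMF_APP+=(--some-extra-arg)                # placeholder argument
fi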
01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:17:31.867 Cannot find device "nvmf_init_br" 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:17:31.867 Cannot find device "nvmf_init_br2" 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:17:31.867 Cannot find device "nvmf_tgt_br" 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:17:31.867 Cannot find device "nvmf_tgt_br2" 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 01:17:31.867 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:17:31.867 Cannot find device "nvmf_init_br" 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:17:32.128 Cannot find device "nvmf_init_br2" 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:17:32.128 Cannot find device "nvmf_tgt_br" 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:17:32.128 Cannot find device "nvmf_tgt_br2" 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:17:32.128 Cannot find device "nvmf_br" 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:17:32.128 Cannot find device "nvmf_init_if" 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:17:32.128 Cannot find device "nvmf_init_if2" 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:17:32.128 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:17:32.128 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:17:32.128 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:17:32.388 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:17:32.388 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
01:17:32.388 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:17:32.388 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:17:32.388 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:17:32.388 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:17:32.388 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:17:32.388 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:17:32.388 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:17:32.388 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:17:32.388 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.139 ms 01:17:32.388 01:17:32.388 --- 10.0.0.3 ping statistics --- 01:17:32.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:32.388 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 01:17:32.388 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:17:32.388 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:17:32.388 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.093 ms 01:17:32.388 01:17:32.388 --- 10.0.0.4 ping statistics --- 01:17:32.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:32.388 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:17:32.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:17:32.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 01:17:32.389 01:17:32.389 --- 10.0.0.1 ping statistics --- 01:17:32.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:32.389 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:17:32.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:17:32.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 01:17:32.389 01:17:32.389 --- 10.0.0.2 ping statistics --- 01:17:32.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:17:32.389 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:17:32.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63296 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63296 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63296 ']' 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:17:32.389 05:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:17:32.389 [2024-12-09 05:12:14.745145] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
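The firewall, connectivity-check, and target-start steps traced above reduce to the commands below. The SPDK_NVMF comment string is what the ipts() wrapper attaches so teardown can find and delete exactly these rules; backgrounding nvmf_tgt with & is a simplification here (the real nvmfappstart helper also waits for the RPC socket):

    # Allow NVMe/TCP (port 4420) in from the initiator-side veths; tag the rules for cleanup.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # Sanity-check connectivity in both directions across the bridge.
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

    # Start the NVMe-oF target inside the namespace: core 0, all tracepoint groups enabled.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &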
01:17:32.389 [2024-12-09 05:12:14.745242] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:17:32.649 [2024-12-09 05:12:14.878749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:32.649 [2024-12-09 05:12:14.924473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:17:32.649 [2024-12-09 05:12:14.924601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:17:32.649 [2024-12-09 05:12:14.924651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:17:32.649 [2024-12-09 05:12:14.924677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:17:32.649 [2024-12-09 05:12:14.924693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:17:32.649 [2024-12-09 05:12:14.925043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:17:32.649 [2024-12-09 05:12:14.965125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:17:33.216 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:33.216 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 01:17:33.216 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:17:33.216 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 01:17:33.216 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:17:33.216 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:17:33.216 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:17:33.476 [2024-12-09 05:12:15.845599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:17:33.476 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 01:17:33.476 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:17:33.476 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:33.476 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:17:33.476 ************************************ 01:17:33.476 START TEST lvs_grow_clean 01:17:33.476 ************************************ 01:17:33.476 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 01:17:33.476 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:17:33.476 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 01:17:33.476 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:17:33.476 05:12:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:17:33.476 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:17:33.476 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:17:33.476 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:17:33.476 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:17:33.476 05:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:17:33.734 05:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:17:33.734 05:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:17:33.993 05:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=41ce3f8c-9ed8-4d5f-a049-75069200860f 01:17:33.993 05:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41ce3f8c-9ed8-4d5f-a049-75069200860f 01:17:33.993 05:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:17:34.251 05:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:17:34.251 05:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:17:34.251 05:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 41ce3f8c-9ed8-4d5f-a049-75069200860f lvol 150 01:17:34.509 05:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=23412e25-d3d5-448f-bf12-ac2c907763aa 01:17:34.509 05:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:17:34.509 05:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:17:34.767 [2024-12-09 05:12:16.982224] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:17:34.767 [2024-12-09 05:12:16.982305] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:17:34.767 true 01:17:34.767 05:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41ce3f8c-9ed8-4d5f-a049-75069200860f 01:17:34.767 05:12:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:17:35.024 05:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:17:35.024 05:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:17:35.024 05:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 23412e25-d3d5-448f-bf12-ac2c907763aa 01:17:35.281 05:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:17:35.540 [2024-12-09 05:12:17.837035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:17:35.540 05:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:17:35.799 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63373 01:17:35.799 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:17:35.799 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:17:35.799 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63373 /var/tmp/bdevperf.sock 01:17:35.799 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63373 ']' 01:17:35.799 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:17:35.799 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:35.799 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:17:35.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:17:35.799 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:35.799 05:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:17:35.799 [2024-12-09 05:12:18.127109] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
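Condensed from the xtrace above, the lvs_grow_clean setup is the RPC sequence below: a 200 MiB file-backed AIO bdev, an lvstore with 4 MiB clusters on top of it, a 150 MiB lvol, a grow of the backing file, and finally export of the lvol over NVMe/TCP. The UUID capture via command substitution mirrors what the script does; the UUIDs themselves are whatever this run generated:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    # 200 MiB backing file -> AIO bdev (4 KiB blocks) -> lvstore with 4 MiB clusters -> 150 MiB lvol.
    truncate -s 200M "$AIO"
    $RPC bdev_aio_create "$AIO" aio_bdev 4096
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)

    # Grow the backing file to 400 MiB and let the AIO bdev pick up the new block count.
    truncate -s 400M "$AIO"
    $RPC bdev_aio_rescan aio_bdev

    # Export the lvol over NVMe/TCP on the namespaced target address.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

Note that the rescan alone does not grow the lvstore: the trace checks that total_data_clusters is still 49 afterwards, and only once bdev_lvol_grow_lvstore is issued (mid-I/O, further down) does the same query report 99 clusters.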
01:17:35.799 [2024-12-09 05:12:18.127272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63373 ] 01:17:36.056 [2024-12-09 05:12:18.278226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:36.056 [2024-12-09 05:12:18.328921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:17:36.056 [2024-12-09 05:12:18.369958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:17:36.623 05:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:36.623 05:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 01:17:36.623 05:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:17:36.881 Nvme0n1 01:17:36.881 05:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:17:37.141 [ 01:17:37.141 { 01:17:37.141 "name": "Nvme0n1", 01:17:37.141 "aliases": [ 01:17:37.141 "23412e25-d3d5-448f-bf12-ac2c907763aa" 01:17:37.141 ], 01:17:37.141 "product_name": "NVMe disk", 01:17:37.141 "block_size": 4096, 01:17:37.141 "num_blocks": 38912, 01:17:37.141 "uuid": "23412e25-d3d5-448f-bf12-ac2c907763aa", 01:17:37.141 "numa_id": -1, 01:17:37.141 "assigned_rate_limits": { 01:17:37.141 "rw_ios_per_sec": 0, 01:17:37.141 "rw_mbytes_per_sec": 0, 01:17:37.141 "r_mbytes_per_sec": 0, 01:17:37.141 "w_mbytes_per_sec": 0 01:17:37.141 }, 01:17:37.141 "claimed": false, 01:17:37.141 "zoned": false, 01:17:37.141 "supported_io_types": { 01:17:37.141 "read": true, 01:17:37.141 "write": true, 01:17:37.141 "unmap": true, 01:17:37.141 "flush": true, 01:17:37.141 "reset": true, 01:17:37.141 "nvme_admin": true, 01:17:37.141 "nvme_io": true, 01:17:37.141 "nvme_io_md": false, 01:17:37.141 "write_zeroes": true, 01:17:37.141 "zcopy": false, 01:17:37.141 "get_zone_info": false, 01:17:37.141 "zone_management": false, 01:17:37.141 "zone_append": false, 01:17:37.141 "compare": true, 01:17:37.141 "compare_and_write": true, 01:17:37.141 "abort": true, 01:17:37.141 "seek_hole": false, 01:17:37.141 "seek_data": false, 01:17:37.141 "copy": true, 01:17:37.141 "nvme_iov_md": false 01:17:37.141 }, 01:17:37.141 "memory_domains": [ 01:17:37.141 { 01:17:37.141 "dma_device_id": "system", 01:17:37.141 "dma_device_type": 1 01:17:37.141 } 01:17:37.141 ], 01:17:37.141 "driver_specific": { 01:17:37.141 "nvme": [ 01:17:37.141 { 01:17:37.141 "trid": { 01:17:37.141 "trtype": "TCP", 01:17:37.141 "adrfam": "IPv4", 01:17:37.141 "traddr": "10.0.0.3", 01:17:37.141 "trsvcid": "4420", 01:17:37.141 "subnqn": "nqn.2016-06.io.spdk:cnode0" 01:17:37.141 }, 01:17:37.141 "ctrlr_data": { 01:17:37.141 "cntlid": 1, 01:17:37.141 "vendor_id": "0x8086", 01:17:37.141 "model_number": "SPDK bdev Controller", 01:17:37.141 "serial_number": "SPDK0", 01:17:37.141 "firmware_revision": "25.01", 01:17:37.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:17:37.141 "oacs": { 01:17:37.141 "security": 0, 01:17:37.141 "format": 0, 01:17:37.141 "firmware": 0, 
01:17:37.141 "ns_manage": 0 01:17:37.141 }, 01:17:37.141 "multi_ctrlr": true, 01:17:37.141 "ana_reporting": false 01:17:37.141 }, 01:17:37.141 "vs": { 01:17:37.141 "nvme_version": "1.3" 01:17:37.141 }, 01:17:37.141 "ns_data": { 01:17:37.141 "id": 1, 01:17:37.141 "can_share": true 01:17:37.141 } 01:17:37.141 } 01:17:37.141 ], 01:17:37.141 "mp_policy": "active_passive" 01:17:37.141 } 01:17:37.141 } 01:17:37.141 ] 01:17:37.141 05:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63397 01:17:37.141 05:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:17:37.141 05:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:17:37.401 Running I/O for 10 seconds... 01:17:38.341 Latency(us) 01:17:38.341 [2024-12-09T05:12:20.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:38.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:38.341 Nvme0n1 : 1.00 8523.00 33.29 0.00 0.00 0.00 0.00 0.00 01:17:38.341 [2024-12-09T05:12:20.797Z] =================================================================================================================== 01:17:38.341 [2024-12-09T05:12:20.797Z] Total : 8523.00 33.29 0.00 0.00 0.00 0.00 0.00 01:17:38.341 01:17:39.281 05:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 41ce3f8c-9ed8-4d5f-a049-75069200860f 01:17:39.281 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:39.281 Nvme0n1 : 2.00 8833.50 34.51 0.00 0.00 0.00 0.00 0.00 01:17:39.281 [2024-12-09T05:12:21.737Z] =================================================================================================================== 01:17:39.281 [2024-12-09T05:12:21.737Z] Total : 8833.50 34.51 0.00 0.00 0.00 0.00 0.00 01:17:39.281 01:17:39.541 true 01:17:39.541 05:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41ce3f8c-9ed8-4d5f-a049-75069200860f 01:17:39.541 05:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:17:39.801 05:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:17:39.801 05:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:17:39.801 05:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63397 01:17:40.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:40.370 Nvme0n1 : 3.00 8852.33 34.58 0.00 0.00 0.00 0.00 0.00 01:17:40.370 [2024-12-09T05:12:22.826Z] =================================================================================================================== 01:17:40.370 [2024-12-09T05:12:22.826Z] Total : 8852.33 34.58 0.00 0.00 0.00 0.00 0.00 01:17:40.370 01:17:41.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:41.308 Nvme0n1 : 4.00 8513.00 33.25 0.00 0.00 0.00 0.00 0.00 01:17:41.308 [2024-12-09T05:12:23.764Z] 
=================================================================================================================== 01:17:41.308 [2024-12-09T05:12:23.764Z] Total : 8513.00 33.25 0.00 0.00 0.00 0.00 0.00 01:17:41.308 01:17:42.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:42.244 Nvme0n1 : 5.00 8314.00 32.48 0.00 0.00 0.00 0.00 0.00 01:17:42.244 [2024-12-09T05:12:24.700Z] =================================================================================================================== 01:17:42.244 [2024-12-09T05:12:24.700Z] Total : 8314.00 32.48 0.00 0.00 0.00 0.00 0.00 01:17:42.244 01:17:43.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:43.618 Nvme0n1 : 6.00 8367.67 32.69 0.00 0.00 0.00 0.00 0.00 01:17:43.618 [2024-12-09T05:12:26.074Z] =================================================================================================================== 01:17:43.618 [2024-12-09T05:12:26.074Z] Total : 8367.67 32.69 0.00 0.00 0.00 0.00 0.00 01:17:43.618 01:17:44.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:44.556 Nvme0n1 : 7.00 8387.86 32.77 0.00 0.00 0.00 0.00 0.00 01:17:44.556 [2024-12-09T05:12:27.012Z] =================================================================================================================== 01:17:44.556 [2024-12-09T05:12:27.012Z] Total : 8387.86 32.77 0.00 0.00 0.00 0.00 0.00 01:17:44.556 01:17:45.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:45.495 Nvme0n1 : 8.00 8403.00 32.82 0.00 0.00 0.00 0.00 0.00 01:17:45.495 [2024-12-09T05:12:27.951Z] =================================================================================================================== 01:17:45.495 [2024-12-09T05:12:27.951Z] Total : 8403.00 32.82 0.00 0.00 0.00 0.00 0.00 01:17:45.495 01:17:46.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:46.435 Nvme0n1 : 9.00 8400.67 32.82 0.00 0.00 0.00 0.00 0.00 01:17:46.435 [2024-12-09T05:12:28.891Z] =================================================================================================================== 01:17:46.435 [2024-12-09T05:12:28.891Z] Total : 8400.67 32.82 0.00 0.00 0.00 0.00 0.00 01:17:46.435 01:17:47.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:47.375 Nvme0n1 : 10.00 8373.40 32.71 0.00 0.00 0.00 0.00 0.00 01:17:47.375 [2024-12-09T05:12:29.831Z] =================================================================================================================== 01:17:47.375 [2024-12-09T05:12:29.831Z] Total : 8373.40 32.71 0.00 0.00 0.00 0.00 0.00 01:17:47.375 01:17:47.375 01:17:47.375 Latency(us) 01:17:47.375 [2024-12-09T05:12:29.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:47.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:47.375 Nvme0n1 : 10.00 8383.87 32.75 0.00 0.00 15263.13 7440.77 285725.51 01:17:47.375 [2024-12-09T05:12:29.831Z] =================================================================================================================== 01:17:47.375 [2024-12-09T05:12:29.831Z] Total : 8383.87 32.75 0.00 0.00 15263.13 7440.77 285725.51 01:17:47.375 { 01:17:47.375 "results": [ 01:17:47.375 { 01:17:47.375 "job": "Nvme0n1", 01:17:47.375 "core_mask": "0x2", 01:17:47.375 "workload": "randwrite", 01:17:47.375 "status": "finished", 01:17:47.375 "queue_depth": 128, 01:17:47.375 "io_size": 4096, 01:17:47.375 "runtime": 
10.002774, 01:17:47.375 "iops": 8383.8743132655, 01:17:47.375 "mibps": 32.74950903619336, 01:17:47.375 "io_failed": 0, 01:17:47.375 "io_timeout": 0, 01:17:47.375 "avg_latency_us": 15263.125571632083, 01:17:47.375 "min_latency_us": 7440.768558951965, 01:17:47.375 "max_latency_us": 285725.51266375545 01:17:47.375 } 01:17:47.375 ], 01:17:47.375 "core_count": 1 01:17:47.375 } 01:17:47.375 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63373 01:17:47.375 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63373 ']' 01:17:47.375 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63373 01:17:47.375 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 01:17:47.375 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:17:47.375 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63373 01:17:47.375 killing process with pid 63373 01:17:47.375 Received shutdown signal, test time was about 10.000000 seconds 01:17:47.375 01:17:47.375 Latency(us) 01:17:47.375 [2024-12-09T05:12:29.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:47.375 [2024-12-09T05:12:29.831Z] =================================================================================================================== 01:17:47.375 [2024-12-09T05:12:29.831Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:17:47.375 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:17:47.375 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:17:47.375 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63373' 01:17:47.375 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63373 01:17:47.375 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63373 01:17:47.634 05:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:17:47.893 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:17:48.153 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41ce3f8c-9ed8-4d5f-a049-75069200860f 01:17:48.153 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:17:48.412 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 01:17:48.412 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 01:17:48.412 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:17:48.412 [2024-12-09 05:12:30.814963] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 01:17:48.412 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41ce3f8c-9ed8-4d5f-a049-75069200860f 01:17:48.413 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 01:17:48.413 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41ce3f8c-9ed8-4d5f-a049-75069200860f 01:17:48.413 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:17:48.413 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:48.413 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:17:48.413 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:48.413 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:17:48.413 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:17:48.413 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:17:48.413 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:17:48.413 05:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41ce3f8c-9ed8-4d5f-a049-75069200860f 01:17:48.692 request: 01:17:48.692 { 01:17:48.692 "uuid": "41ce3f8c-9ed8-4d5f-a049-75069200860f", 01:17:48.692 "method": "bdev_lvol_get_lvstores", 01:17:48.692 "req_id": 1 01:17:48.692 } 01:17:48.692 Got JSON-RPC error response 01:17:48.692 response: 01:17:48.692 { 01:17:48.692 "code": -19, 01:17:48.692 "message": "No such device" 01:17:48.692 } 01:17:48.693 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 01:17:48.693 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:17:48.693 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:17:48.693 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:17:48.693 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:17:48.952 aio_bdev 01:17:48.952 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
23412e25-d3d5-448f-bf12-ac2c907763aa 01:17:48.952 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=23412e25-d3d5-448f-bf12-ac2c907763aa 01:17:48.952 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:17:48.952 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 01:17:48.952 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:17:48.952 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:17:48.952 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:17:49.211 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 23412e25-d3d5-448f-bf12-ac2c907763aa -t 2000 01:17:49.471 [ 01:17:49.471 { 01:17:49.471 "name": "23412e25-d3d5-448f-bf12-ac2c907763aa", 01:17:49.471 "aliases": [ 01:17:49.471 "lvs/lvol" 01:17:49.471 ], 01:17:49.471 "product_name": "Logical Volume", 01:17:49.471 "block_size": 4096, 01:17:49.471 "num_blocks": 38912, 01:17:49.471 "uuid": "23412e25-d3d5-448f-bf12-ac2c907763aa", 01:17:49.471 "assigned_rate_limits": { 01:17:49.471 "rw_ios_per_sec": 0, 01:17:49.471 "rw_mbytes_per_sec": 0, 01:17:49.471 "r_mbytes_per_sec": 0, 01:17:49.471 "w_mbytes_per_sec": 0 01:17:49.471 }, 01:17:49.471 "claimed": false, 01:17:49.471 "zoned": false, 01:17:49.471 "supported_io_types": { 01:17:49.471 "read": true, 01:17:49.471 "write": true, 01:17:49.471 "unmap": true, 01:17:49.471 "flush": false, 01:17:49.471 "reset": true, 01:17:49.471 "nvme_admin": false, 01:17:49.471 "nvme_io": false, 01:17:49.471 "nvme_io_md": false, 01:17:49.471 "write_zeroes": true, 01:17:49.471 "zcopy": false, 01:17:49.471 "get_zone_info": false, 01:17:49.471 "zone_management": false, 01:17:49.471 "zone_append": false, 01:17:49.471 "compare": false, 01:17:49.471 "compare_and_write": false, 01:17:49.471 "abort": false, 01:17:49.471 "seek_hole": true, 01:17:49.471 "seek_data": true, 01:17:49.471 "copy": false, 01:17:49.471 "nvme_iov_md": false 01:17:49.471 }, 01:17:49.471 "driver_specific": { 01:17:49.471 "lvol": { 01:17:49.471 "lvol_store_uuid": "41ce3f8c-9ed8-4d5f-a049-75069200860f", 01:17:49.471 "base_bdev": "aio_bdev", 01:17:49.471 "thin_provision": false, 01:17:49.471 "num_allocated_clusters": 38, 01:17:49.471 "snapshot": false, 01:17:49.471 "clone": false, 01:17:49.471 "esnap_clone": false 01:17:49.471 } 01:17:49.471 } 01:17:49.471 } 01:17:49.471 ] 01:17:49.471 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 01:17:49.471 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:17:49.471 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41ce3f8c-9ed8-4d5f-a049-75069200860f 01:17:49.471 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:17:49.471 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41ce3f8c-9ed8-4d5f-a049-75069200860f 01:17:49.471 05:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:17:49.731 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:17:49.731 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 23412e25-d3d5-448f-bf12-ac2c907763aa 01:17:49.990 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 41ce3f8c-9ed8-4d5f-a049-75069200860f 01:17:50.249 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:17:50.508 05:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:17:50.798 01:17:50.798 real 0m17.285s 01:17:50.798 user 0m16.207s 01:17:50.798 sys 0m2.352s 01:17:50.798 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 01:17:50.798 ************************************ 01:17:50.798 END TEST lvs_grow_clean 01:17:50.798 ************************************ 01:17:50.798 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:17:50.798 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 01:17:50.798 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:17:50.798 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 01:17:50.798 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:17:50.798 ************************************ 01:17:50.798 START TEST lvs_grow_dirty 01:17:50.798 ************************************ 01:17:50.798 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 01:17:50.798 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:17:50.799 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 01:17:50.799 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:17:50.799 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:17:50.799 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:17:50.799 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:17:50.799 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:17:50.799 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:17:50.799 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:17:51.068 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:17:51.068 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:17:51.328 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cff54f30-757c-4005-9091-405ce32b4f5f 01:17:51.328 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cff54f30-757c-4005-9091-405ce32b4f5f 01:17:51.328 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:17:51.586 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:17:51.586 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:17:51.586 05:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cff54f30-757c-4005-9091-405ce32b4f5f lvol 150 01:17:51.845 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=63b1bb45-58af-4b2b-82a1-80f0e2758b51 01:17:51.845 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:17:51.845 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:17:52.104 [2024-12-09 05:12:34.321264] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:17:52.104 [2024-12-09 05:12:34.321450] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:17:52.104 true 01:17:52.104 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cff54f30-757c-4005-9091-405ce32b4f5f 01:17:52.104 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:17:52.104 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:17:52.104 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:17:52.362 05:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 63b1bb45-58af-4b2b-82a1-80f0e2758b51 01:17:52.627 05:12:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:17:52.902 [2024-12-09 05:12:35.184002] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:17:52.902 05:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:17:53.163 05:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:17:53.163 05:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63635 01:17:53.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:17:53.163 05:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:17:53.163 05:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63635 /var/tmp/bdevperf.sock 01:17:53.163 05:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63635 ']' 01:17:53.163 05:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:17:53.163 05:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 01:17:53.163 05:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:17:53.163 05:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 01:17:53.163 05:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:17:53.163 [2024-12-09 05:12:35.437576] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
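As in the clean case, the I/O load is generated by bdevperf run as an RPC server: started with -z it sits idle on its own socket, the exported namespace is attached as a local bdev over TCP, and perform_tests launches the configured workload. A condensed sketch with the flags and addresses this run uses:

    SPDK=/home/vagrant/spdk_repo/spdk

    # Start bdevperf idle (-z) on core 1: 4 KiB randwrite, queue depth 128, 10 s run, 1 s status interval.
    $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
        -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # Attach the exported namespace as bdev Nvme0n1 over NVMe/TCP.
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # Kick off the configured workload and collect the per-second results seen below.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests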
01:17:53.163 [2024-12-09 05:12:35.437732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63635 ] 01:17:53.163 [2024-12-09 05:12:35.573730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:17:53.420 [2024-12-09 05:12:35.650569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:17:53.420 [2024-12-09 05:12:35.696207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:17:53.986 05:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:17:53.986 05:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 01:17:53.986 05:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:17:54.243 Nvme0n1 01:17:54.243 05:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:17:54.500 [ 01:17:54.500 { 01:17:54.500 "name": "Nvme0n1", 01:17:54.500 "aliases": [ 01:17:54.500 "63b1bb45-58af-4b2b-82a1-80f0e2758b51" 01:17:54.500 ], 01:17:54.500 "product_name": "NVMe disk", 01:17:54.500 "block_size": 4096, 01:17:54.500 "num_blocks": 38912, 01:17:54.500 "uuid": "63b1bb45-58af-4b2b-82a1-80f0e2758b51", 01:17:54.500 "numa_id": -1, 01:17:54.500 "assigned_rate_limits": { 01:17:54.500 "rw_ios_per_sec": 0, 01:17:54.500 "rw_mbytes_per_sec": 0, 01:17:54.500 "r_mbytes_per_sec": 0, 01:17:54.500 "w_mbytes_per_sec": 0 01:17:54.500 }, 01:17:54.500 "claimed": false, 01:17:54.500 "zoned": false, 01:17:54.500 "supported_io_types": { 01:17:54.500 "read": true, 01:17:54.500 "write": true, 01:17:54.500 "unmap": true, 01:17:54.500 "flush": true, 01:17:54.500 "reset": true, 01:17:54.500 "nvme_admin": true, 01:17:54.500 "nvme_io": true, 01:17:54.500 "nvme_io_md": false, 01:17:54.500 "write_zeroes": true, 01:17:54.500 "zcopy": false, 01:17:54.500 "get_zone_info": false, 01:17:54.500 "zone_management": false, 01:17:54.500 "zone_append": false, 01:17:54.500 "compare": true, 01:17:54.500 "compare_and_write": true, 01:17:54.500 "abort": true, 01:17:54.500 "seek_hole": false, 01:17:54.500 "seek_data": false, 01:17:54.500 "copy": true, 01:17:54.500 "nvme_iov_md": false 01:17:54.500 }, 01:17:54.500 "memory_domains": [ 01:17:54.500 { 01:17:54.500 "dma_device_id": "system", 01:17:54.500 "dma_device_type": 1 01:17:54.500 } 01:17:54.500 ], 01:17:54.500 "driver_specific": { 01:17:54.500 "nvme": [ 01:17:54.501 { 01:17:54.501 "trid": { 01:17:54.501 "trtype": "TCP", 01:17:54.501 "adrfam": "IPv4", 01:17:54.501 "traddr": "10.0.0.3", 01:17:54.501 "trsvcid": "4420", 01:17:54.501 "subnqn": "nqn.2016-06.io.spdk:cnode0" 01:17:54.501 }, 01:17:54.501 "ctrlr_data": { 01:17:54.501 "cntlid": 1, 01:17:54.501 "vendor_id": "0x8086", 01:17:54.501 "model_number": "SPDK bdev Controller", 01:17:54.501 "serial_number": "SPDK0", 01:17:54.501 "firmware_revision": "25.01", 01:17:54.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:17:54.501 "oacs": { 01:17:54.501 "security": 0, 01:17:54.501 "format": 0, 01:17:54.501 "firmware": 0, 
01:17:54.501 "ns_manage": 0 01:17:54.501 }, 01:17:54.501 "multi_ctrlr": true, 01:17:54.501 "ana_reporting": false 01:17:54.501 }, 01:17:54.501 "vs": { 01:17:54.501 "nvme_version": "1.3" 01:17:54.501 }, 01:17:54.501 "ns_data": { 01:17:54.501 "id": 1, 01:17:54.501 "can_share": true 01:17:54.501 } 01:17:54.501 } 01:17:54.501 ], 01:17:54.501 "mp_policy": "active_passive" 01:17:54.501 } 01:17:54.501 } 01:17:54.501 ] 01:17:54.501 05:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63659 01:17:54.501 05:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:17:54.501 05:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:17:54.757 Running I/O for 10 seconds... 01:17:55.690 Latency(us) 01:17:55.690 [2024-12-09T05:12:38.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:17:55.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:55.690 Nvme0n1 : 1.00 9398.00 36.71 0.00 0.00 0.00 0.00 0.00 01:17:55.690 [2024-12-09T05:12:38.146Z] =================================================================================================================== 01:17:55.690 [2024-12-09T05:12:38.146Z] Total : 9398.00 36.71 0.00 0.00 0.00 0.00 0.00 01:17:55.690 01:17:56.628 05:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cff54f30-757c-4005-9091-405ce32b4f5f 01:17:56.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:56.628 Nvme0n1 : 2.00 9268.50 36.21 0.00 0.00 0.00 0.00 0.00 01:17:56.628 [2024-12-09T05:12:39.084Z] =================================================================================================================== 01:17:56.628 [2024-12-09T05:12:39.084Z] Total : 9268.50 36.21 0.00 0.00 0.00 0.00 0.00 01:17:56.628 01:17:56.886 true 01:17:56.887 05:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cff54f30-757c-4005-9091-405ce32b4f5f 01:17:56.887 05:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:17:57.172 05:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:17:57.172 05:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:17:57.172 05:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63659 01:17:57.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:57.784 Nvme0n1 : 3.00 9142.33 35.71 0.00 0.00 0.00 0.00 0.00 01:17:57.784 [2024-12-09T05:12:40.240Z] =================================================================================================================== 01:17:57.784 [2024-12-09T05:12:40.240Z] Total : 9142.33 35.71 0.00 0.00 0.00 0.00 0.00 01:17:57.784 01:17:58.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:58.720 Nvme0n1 : 4.00 9047.50 35.34 0.00 0.00 0.00 0.00 0.00 01:17:58.720 [2024-12-09T05:12:41.176Z] 
=================================================================================================================== 01:17:58.720 [2024-12-09T05:12:41.176Z] Total : 9047.50 35.34 0.00 0.00 0.00 0.00 0.00 01:17:58.721 01:17:59.656 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:17:59.656 Nvme0n1 : 5.00 8965.20 35.02 0.00 0.00 0.00 0.00 0.00 01:17:59.656 [2024-12-09T05:12:42.112Z] =================================================================================================================== 01:17:59.656 [2024-12-09T05:12:42.112Z] Total : 8965.20 35.02 0.00 0.00 0.00 0.00 0.00 01:17:59.656 01:18:00.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:00.593 Nvme0n1 : 6.00 8931.50 34.89 0.00 0.00 0.00 0.00 0.00 01:18:00.593 [2024-12-09T05:12:43.049Z] =================================================================================================================== 01:18:00.593 [2024-12-09T05:12:43.049Z] Total : 8931.50 34.89 0.00 0.00 0.00 0.00 0.00 01:18:00.593 01:18:02.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:02.021 Nvme0n1 : 7.00 8871.14 34.65 0.00 0.00 0.00 0.00 0.00 01:18:02.021 [2024-12-09T05:12:44.477Z] =================================================================================================================== 01:18:02.021 [2024-12-09T05:12:44.477Z] Total : 8871.14 34.65 0.00 0.00 0.00 0.00 0.00 01:18:02.021 01:18:02.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:02.586 Nvme0n1 : 8.00 8857.62 34.60 0.00 0.00 0.00 0.00 0.00 01:18:02.586 [2024-12-09T05:12:45.042Z] =================================================================================================================== 01:18:02.586 [2024-12-09T05:12:45.042Z] Total : 8857.62 34.60 0.00 0.00 0.00 0.00 0.00 01:18:02.586 01:18:03.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:03.965 Nvme0n1 : 9.00 8617.11 33.66 0.00 0.00 0.00 0.00 0.00 01:18:03.965 [2024-12-09T05:12:46.421Z] =================================================================================================================== 01:18:03.965 [2024-12-09T05:12:46.421Z] Total : 8617.11 33.66 0.00 0.00 0.00 0.00 0.00 01:18:03.965 01:18:04.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:04.542 Nvme0n1 : 10.00 7895.10 30.84 0.00 0.00 0.00 0.00 0.00 01:18:04.542 [2024-12-09T05:12:46.998Z] =================================================================================================================== 01:18:04.542 [2024-12-09T05:12:46.998Z] Total : 7895.10 30.84 0.00 0.00 0.00 0.00 0.00 01:18:04.542 01:18:04.542 01:18:04.542 Latency(us) 01:18:04.542 [2024-12-09T05:12:46.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:04.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:18:04.542 Nvme0n1 : 10.01 7898.49 30.85 0.00 0.00 16200.16 8299.32 1091617.98 01:18:04.542 [2024-12-09T05:12:46.998Z] =================================================================================================================== 01:18:04.542 [2024-12-09T05:12:46.998Z] Total : 7898.49 30.85 0.00 0.00 16200.16 8299.32 1091617.98 01:18:04.542 { 01:18:04.542 "results": [ 01:18:04.542 { 01:18:04.542 "job": "Nvme0n1", 01:18:04.542 "core_mask": "0x2", 01:18:04.542 "workload": "randwrite", 01:18:04.542 "status": "finished", 01:18:04.542 "queue_depth": 128, 01:18:04.542 "io_size": 4096, 01:18:04.542 "runtime": 
10.011917, 01:18:04.542 "iops": 7898.487372598075, 01:18:04.542 "mibps": 30.85346629921123, 01:18:04.542 "io_failed": 0, 01:18:04.542 "io_timeout": 0, 01:18:04.542 "avg_latency_us": 16200.160121212046, 01:18:04.542 "min_latency_us": 8299.318777292576, 01:18:04.542 "max_latency_us": 1091617.9842794759 01:18:04.542 } 01:18:04.542 ], 01:18:04.542 "core_count": 1 01:18:04.542 } 01:18:04.805 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63635 01:18:04.805 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63635 ']' 01:18:04.805 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63635 01:18:04.805 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 01:18:04.805 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:04.805 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63635 01:18:04.805 killing process with pid 63635 01:18:04.805 Received shutdown signal, test time was about 10.000000 seconds 01:18:04.805 01:18:04.805 Latency(us) 01:18:04.805 [2024-12-09T05:12:47.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:04.805 [2024-12-09T05:12:47.261Z] =================================================================================================================== 01:18:04.805 [2024-12-09T05:12:47.261Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:18:04.805 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:18:04.805 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:18:04.805 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63635' 01:18:04.805 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63635 01:18:04.805 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63635 01:18:05.064 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:18:05.065 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:18:05.326 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cff54f30-757c-4005-9091-405ce32b4f5f 01:18:05.326 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63296 
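The traced commands around this point are the heart of the lvs_grow_dirty case: the lvol store is grown while bdevperf drives random writes, free_clusters is sampled (61 here), the "[[ dirty == dirty ]]" guard selects the dirty variant, and the nvmf target (pid 63296 in this run) is killed with SIGKILL so the grown metadata is never cleanly flushed; the wait traced immediately after this note simply reaps the killed process before the target is restarted. Condensed into a standalone sketch, and assuming the backing aio file was already enlarged earlier in the test (that part is outside this excerpt), the sequence is roughly:

  # Hedged sketch of the lvs_grow "dirty" flow seen in this log, not the literal
  # nvmf_lvs_grow.sh. The RPC names and paths below appear verbatim in the log;
  # the lvstore UUID and pids are the ones from this particular run.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  lvs_uuid=cff54f30-757c-4005-9091-405ce32b4f5f
  nvmfpid=63296                                 # nvmf_tgt pid in this run

  # 1. Grow the lvstore while bdevperf I/O is in flight, then confirm the new
  #    capacity is visible (99 total data clusters in this run).
  "$rpc" bdev_lvol_grow_lvstore -u "$lvs_uuid"
  "$rpc" bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters'

  # 2. Kill the target hard so the grown metadata stays "dirty" on disk.
  #    (wait only reaps the pid if it is a child of this shell, as in the test.)
  kill -9 "$nvmfpid" && wait "$nvmfpid" 2>/dev/null || true

  # 3. Restart the target inside the test namespace (waitforlisten polling elided)
  #    and re-attach the aio bdev; blobstore recovery then runs automatically,
  #    which is the "Performing recovery on blobstore" notice further down.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  "$rpc" bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096

  # 4. The recovered lvstore must still report the grown, partially used capacity.
  "$rpc" bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters'   # 61 expected

The "(( free_clusters == 61 ))" and "(( data_clusters == 99 ))" checks that appear after recovery further down in this log are what actually assert that the grown geometry survived the unclean shutdown.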
01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63296 01:18:05.586 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63296 Killed "${NVMF_APP[@]}" "$@" 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63791 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63791 01:18:05.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63791 ']' 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:05.586 05:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:18:05.586 [2024-12-09 05:12:47.990491] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:18:05.586 [2024-12-09 05:12:47.990556] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:18:05.846 [2024-12-09 05:12:48.142892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:05.846 [2024-12-09 05:12:48.196175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:18:05.846 [2024-12-09 05:12:48.196222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:18:05.846 [2024-12-09 05:12:48.196228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:18:05.846 [2024-12-09 05:12:48.196233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:18:05.846 [2024-12-09 05:12:48.196238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
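The notices above come from the freshly restarted nvmf_tgt (pid 63791); the reactor-start and uring socket-override notices that follow complete its startup, at which point waitforlisten returns and the test re-creates the aio bdev. A minimal sketch of such a wait, assuming rpc.py and the default /var/tmp/spdk.sock socket rather than the literal autotest_common.sh helper:

  # Not the autotest_common.sh waitforlisten; just one way to block until the
  # restarted target answers on its UNIX-domain RPC socket.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nvmfpid=63791                                 # pid printed by nvmfappstart above
  for _ in $(seq 1 100); do
      "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.1
  done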
01:18:05.846 [2024-12-09 05:12:48.196526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:05.846 [2024-12-09 05:12:48.239336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:18:06.786 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:06.786 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 01:18:06.786 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:18:06.786 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 01:18:06.786 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:18:06.786 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:18:06.786 05:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:18:06.786 [2024-12-09 05:12:49.128786] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 01:18:06.786 [2024-12-09 05:12:49.129244] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 01:18:06.786 [2024-12-09 05:12:49.129506] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 01:18:06.786 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 01:18:06.786 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 63b1bb45-58af-4b2b-82a1-80f0e2758b51 01:18:06.786 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=63b1bb45-58af-4b2b-82a1-80f0e2758b51 01:18:06.786 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:18:06.786 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 01:18:06.786 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:18:06.786 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:18:06.786 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:18:07.046 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 63b1bb45-58af-4b2b-82a1-80f0e2758b51 -t 2000 01:18:07.305 [ 01:18:07.305 { 01:18:07.305 "name": "63b1bb45-58af-4b2b-82a1-80f0e2758b51", 01:18:07.306 "aliases": [ 01:18:07.306 "lvs/lvol" 01:18:07.306 ], 01:18:07.306 "product_name": "Logical Volume", 01:18:07.306 "block_size": 4096, 01:18:07.306 "num_blocks": 38912, 01:18:07.306 "uuid": "63b1bb45-58af-4b2b-82a1-80f0e2758b51", 01:18:07.306 "assigned_rate_limits": { 01:18:07.306 "rw_ios_per_sec": 0, 01:18:07.306 "rw_mbytes_per_sec": 0, 01:18:07.306 "r_mbytes_per_sec": 0, 01:18:07.306 "w_mbytes_per_sec": 0 01:18:07.306 }, 01:18:07.306 
"claimed": false, 01:18:07.306 "zoned": false, 01:18:07.306 "supported_io_types": { 01:18:07.306 "read": true, 01:18:07.306 "write": true, 01:18:07.306 "unmap": true, 01:18:07.306 "flush": false, 01:18:07.306 "reset": true, 01:18:07.306 "nvme_admin": false, 01:18:07.306 "nvme_io": false, 01:18:07.306 "nvme_io_md": false, 01:18:07.306 "write_zeroes": true, 01:18:07.306 "zcopy": false, 01:18:07.306 "get_zone_info": false, 01:18:07.306 "zone_management": false, 01:18:07.306 "zone_append": false, 01:18:07.306 "compare": false, 01:18:07.306 "compare_and_write": false, 01:18:07.306 "abort": false, 01:18:07.306 "seek_hole": true, 01:18:07.306 "seek_data": true, 01:18:07.306 "copy": false, 01:18:07.306 "nvme_iov_md": false 01:18:07.306 }, 01:18:07.306 "driver_specific": { 01:18:07.306 "lvol": { 01:18:07.306 "lvol_store_uuid": "cff54f30-757c-4005-9091-405ce32b4f5f", 01:18:07.306 "base_bdev": "aio_bdev", 01:18:07.306 "thin_provision": false, 01:18:07.306 "num_allocated_clusters": 38, 01:18:07.306 "snapshot": false, 01:18:07.306 "clone": false, 01:18:07.306 "esnap_clone": false 01:18:07.306 } 01:18:07.306 } 01:18:07.306 } 01:18:07.306 ] 01:18:07.306 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 01:18:07.306 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cff54f30-757c-4005-9091-405ce32b4f5f 01:18:07.306 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 01:18:07.566 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 01:18:07.566 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cff54f30-757c-4005-9091-405ce32b4f5f 01:18:07.566 05:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 01:18:07.826 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 01:18:07.826 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:18:07.826 [2024-12-09 05:12:50.244304] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 01:18:08.086 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cff54f30-757c-4005-9091-405ce32b4f5f 01:18:08.086 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 01:18:08.086 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cff54f30-757c-4005-9091-405ce32b4f5f 01:18:08.086 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:18:08.086 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:08.086 05:12:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:18:08.086 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:08.086 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:18:08.086 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:18:08.086 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:18:08.086 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:18:08.086 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cff54f30-757c-4005-9091-405ce32b4f5f 01:18:08.086 request: 01:18:08.086 { 01:18:08.086 "uuid": "cff54f30-757c-4005-9091-405ce32b4f5f", 01:18:08.086 "method": "bdev_lvol_get_lvstores", 01:18:08.086 "req_id": 1 01:18:08.086 } 01:18:08.086 Got JSON-RPC error response 01:18:08.086 response: 01:18:08.086 { 01:18:08.086 "code": -19, 01:18:08.086 "message": "No such device" 01:18:08.086 } 01:18:08.086 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 01:18:08.086 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:18:08.086 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:18:08.086 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:18:08.086 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:18:08.346 aio_bdev 01:18:08.346 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 63b1bb45-58af-4b2b-82a1-80f0e2758b51 01:18:08.346 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=63b1bb45-58af-4b2b-82a1-80f0e2758b51 01:18:08.346 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:18:08.346 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 01:18:08.346 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:18:08.346 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:18:08.346 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:18:08.606 05:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 63b1bb45-58af-4b2b-82a1-80f0e2758b51 -t 2000 01:18:08.865 [ 01:18:08.865 { 
01:18:08.865 "name": "63b1bb45-58af-4b2b-82a1-80f0e2758b51", 01:18:08.865 "aliases": [ 01:18:08.865 "lvs/lvol" 01:18:08.865 ], 01:18:08.865 "product_name": "Logical Volume", 01:18:08.865 "block_size": 4096, 01:18:08.865 "num_blocks": 38912, 01:18:08.865 "uuid": "63b1bb45-58af-4b2b-82a1-80f0e2758b51", 01:18:08.865 "assigned_rate_limits": { 01:18:08.865 "rw_ios_per_sec": 0, 01:18:08.865 "rw_mbytes_per_sec": 0, 01:18:08.865 "r_mbytes_per_sec": 0, 01:18:08.865 "w_mbytes_per_sec": 0 01:18:08.865 }, 01:18:08.865 "claimed": false, 01:18:08.865 "zoned": false, 01:18:08.865 "supported_io_types": { 01:18:08.865 "read": true, 01:18:08.865 "write": true, 01:18:08.865 "unmap": true, 01:18:08.865 "flush": false, 01:18:08.865 "reset": true, 01:18:08.865 "nvme_admin": false, 01:18:08.865 "nvme_io": false, 01:18:08.866 "nvme_io_md": false, 01:18:08.866 "write_zeroes": true, 01:18:08.866 "zcopy": false, 01:18:08.866 "get_zone_info": false, 01:18:08.866 "zone_management": false, 01:18:08.866 "zone_append": false, 01:18:08.866 "compare": false, 01:18:08.866 "compare_and_write": false, 01:18:08.866 "abort": false, 01:18:08.866 "seek_hole": true, 01:18:08.866 "seek_data": true, 01:18:08.866 "copy": false, 01:18:08.866 "nvme_iov_md": false 01:18:08.866 }, 01:18:08.866 "driver_specific": { 01:18:08.866 "lvol": { 01:18:08.866 "lvol_store_uuid": "cff54f30-757c-4005-9091-405ce32b4f5f", 01:18:08.866 "base_bdev": "aio_bdev", 01:18:08.866 "thin_provision": false, 01:18:08.866 "num_allocated_clusters": 38, 01:18:08.866 "snapshot": false, 01:18:08.866 "clone": false, 01:18:08.866 "esnap_clone": false 01:18:08.866 } 01:18:08.866 } 01:18:08.866 } 01:18:08.866 ] 01:18:08.866 05:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 01:18:08.866 05:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cff54f30-757c-4005-9091-405ce32b4f5f 01:18:08.866 05:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:18:09.126 05:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:18:09.126 05:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:18:09.126 05:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cff54f30-757c-4005-9091-405ce32b4f5f 01:18:09.126 05:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:18:09.126 05:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 63b1bb45-58af-4b2b-82a1-80f0e2758b51 01:18:09.405 05:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cff54f30-757c-4005-9091-405ce32b4f5f 01:18:09.664 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:18:09.924 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:18:10.492 ************************************ 01:18:10.492 END TEST lvs_grow_dirty 01:18:10.492 ************************************ 01:18:10.492 01:18:10.492 real 0m19.417s 01:18:10.492 user 0m40.167s 01:18:10.492 sys 0m6.846s 01:18:10.492 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:10.492 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:18:10.492 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 01:18:10.492 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 01:18:10.492 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 01:18:10.492 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 01:18:10.492 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:18:10.492 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 01:18:10.492 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 01:18:10.492 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 01:18:10.492 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:18:10.492 nvmf_trace.0 01:18:10.492 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 01:18:10.492 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 01:18:10.492 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 01:18:10.492 05:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 01:18:11.912 05:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:18:11.912 05:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 01:18:11.912 05:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 01:18:11.912 05:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:18:11.912 rmmod nvme_tcp 01:18:11.912 rmmod nvme_fabrics 01:18:11.912 rmmod nvme_keyring 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63791 ']' 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63791 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63791 ']' 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63791 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 01:18:11.912 05:12:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63791 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:18:11.912 killing process with pid 63791 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63791' 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63791 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63791 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:18:11.912 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 01:18:12.171 01:18:12.171 real 0m40.568s 01:18:12.171 user 1m3.330s 01:18:12.171 sys 0m11.190s 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:12.171 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:18:12.171 ************************************ 01:18:12.171 END TEST nvmf_lvs_grow 01:18:12.171 ************************************ 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:18:12.430 ************************************ 01:18:12.430 START TEST nvmf_bdev_io_wait 01:18:12.430 ************************************ 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 01:18:12.430 * Looking for test storage... 
01:18:12.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:18:12.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:12.430 --rc genhtml_branch_coverage=1 01:18:12.430 --rc genhtml_function_coverage=1 01:18:12.430 --rc genhtml_legend=1 01:18:12.430 --rc geninfo_all_blocks=1 01:18:12.430 --rc geninfo_unexecuted_blocks=1 01:18:12.430 01:18:12.430 ' 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:18:12.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:12.430 --rc genhtml_branch_coverage=1 01:18:12.430 --rc genhtml_function_coverage=1 01:18:12.430 --rc genhtml_legend=1 01:18:12.430 --rc geninfo_all_blocks=1 01:18:12.430 --rc geninfo_unexecuted_blocks=1 01:18:12.430 01:18:12.430 ' 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:18:12.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:12.430 --rc genhtml_branch_coverage=1 01:18:12.430 --rc genhtml_function_coverage=1 01:18:12.430 --rc genhtml_legend=1 01:18:12.430 --rc geninfo_all_blocks=1 01:18:12.430 --rc geninfo_unexecuted_blocks=1 01:18:12.430 01:18:12.430 ' 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:18:12.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:12.430 --rc genhtml_branch_coverage=1 01:18:12.430 --rc genhtml_function_coverage=1 01:18:12.430 --rc genhtml_legend=1 01:18:12.430 --rc geninfo_all_blocks=1 01:18:12.430 --rc geninfo_unexecuted_blocks=1 01:18:12.430 01:18:12.430 ' 01:18:12.430 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:18:12.690 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
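The nvmftestinit traced next first tears down any leftover interfaces (hence the run of "Cannot find device" and "Cannot open network namespace" messages, which are expected on a clean host) and then builds the veth/namespace topology used for the TCP transport. Condensed from the ip and iptables commands that follow, the setup amounts to roughly:

  # Condensed from the nvmf_veth_init trace below; interface/namespace names and
  # the 10.0.0.x addresses are exactly the ones printed in this log.
  ip netns add nvmf_tgt_ns_spdk

  # Two initiator-side and two target-side veth pairs; the *_br ends stay in the
  # root namespace and will be enslaved to the bridge.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4 inside the netns.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring everything up and join the four bridge-side peers to one bridge.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # Open NVMe/TCP port 4420 towards the initiators and let the bridge forward.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four ping checks later in the log (10.0.0.3, 10.0.0.4 from the root namespace and 10.0.0.1, 10.0.0.2 from inside the netns) simply confirm that this topology carries traffic in both directions before the target is started.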
01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:18:12.690 
05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:18:12.690 Cannot find device "nvmf_init_br" 01:18:12.690 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 01:18:12.691 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:18:12.691 Cannot find device "nvmf_init_br2" 01:18:12.691 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 01:18:12.691 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:18:12.691 Cannot find device "nvmf_tgt_br" 01:18:12.691 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 01:18:12.691 05:12:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:18:12.691 Cannot find device "nvmf_tgt_br2" 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:18:12.691 Cannot find device "nvmf_init_br" 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:18:12.691 Cannot find device "nvmf_init_br2" 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:18:12.691 Cannot find device "nvmf_tgt_br" 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:18:12.691 Cannot find device "nvmf_tgt_br2" 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:18:12.691 Cannot find device "nvmf_br" 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:18:12.691 Cannot find device "nvmf_init_if" 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:18:12.691 Cannot find device "nvmf_init_if2" 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:18:12.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 01:18:12.691 
05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:18:12.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:18:12.691 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:18:12.950 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:18:12.950 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 01:18:12.950 01:18:12.950 --- 10.0.0.3 ping statistics --- 01:18:12.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:12.950 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:18:12.950 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:18:12.950 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 01:18:12.950 01:18:12.950 --- 10.0.0.4 ping statistics --- 01:18:12.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:12.950 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 01:18:12.950 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:18:12.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:18:12.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 01:18:12.950 01:18:12.950 --- 10.0.0.1 ping statistics --- 01:18:12.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:12.950 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 01:18:13.208 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:18:13.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:18:13.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 01:18:13.208 01:18:13.208 --- 10.0.0.2 ping statistics --- 01:18:13.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:13.209 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:13.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64170 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64170 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64170 ']' 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:13.209 05:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 01:18:13.209 [2024-12-09 05:12:55.520297] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:18:13.209 [2024-12-09 05:12:55.520382] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:18:13.468 [2024-12-09 05:12:55.677907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:18:13.468 [2024-12-09 05:12:55.738008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:18:13.468 [2024-12-09 05:12:55.738065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:18:13.468 [2024-12-09 05:12:55.738074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:18:13.468 [2024-12-09 05:12:55.738080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:18:13.468 [2024-12-09 05:12:55.738084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:18:13.468 [2024-12-09 05:12:55.739104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:18:13.468 [2024-12-09 05:12:55.739210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:18:13.468 [2024-12-09 05:12:55.739339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:13.468 [2024-12-09 05:12:55.739356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:18:14.037 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:14.037 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 01:18:14.037 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:18:14.037 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 01:18:14.037 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:14.296 [2024-12-09 05:12:56.582901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:14.296 [2024-12-09 05:12:56.598328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:14.296 Malloc0 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:14.296 [2024-12-09 05:12:56.661651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64205 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64207 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:18:14.296 05:12:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:18:14.296 { 01:18:14.296 "params": { 01:18:14.296 "name": "Nvme$subsystem", 01:18:14.296 "trtype": "$TEST_TRANSPORT", 01:18:14.296 "traddr": "$NVMF_FIRST_TARGET_IP", 01:18:14.296 "adrfam": "ipv4", 01:18:14.296 "trsvcid": "$NVMF_PORT", 01:18:14.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:18:14.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:18:14.296 "hdgst": ${hdgst:-false}, 01:18:14.296 "ddgst": ${ddgst:-false} 01:18:14.296 }, 01:18:14.296 "method": "bdev_nvme_attach_controller" 01:18:14.296 } 01:18:14.296 EOF 01:18:14.296 )") 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:18:14.296 { 01:18:14.296 "params": { 01:18:14.296 "name": "Nvme$subsystem", 01:18:14.296 "trtype": "$TEST_TRANSPORT", 01:18:14.296 "traddr": "$NVMF_FIRST_TARGET_IP", 01:18:14.296 "adrfam": "ipv4", 01:18:14.296 "trsvcid": "$NVMF_PORT", 01:18:14.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:18:14.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:18:14.296 "hdgst": ${hdgst:-false}, 01:18:14.296 "ddgst": ${ddgst:-false} 01:18:14.296 }, 01:18:14.296 "method": "bdev_nvme_attach_controller" 01:18:14.296 } 01:18:14.296 EOF 01:18:14.296 )") 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:18:14.296 { 01:18:14.296 "params": { 01:18:14.296 "name": "Nvme$subsystem", 01:18:14.296 "trtype": "$TEST_TRANSPORT", 01:18:14.296 "traddr": "$NVMF_FIRST_TARGET_IP", 01:18:14.296 "adrfam": "ipv4", 01:18:14.296 "trsvcid": "$NVMF_PORT", 01:18:14.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:18:14.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:18:14.296 "hdgst": ${hdgst:-false}, 01:18:14.296 "ddgst": ${ddgst:-false} 01:18:14.296 }, 01:18:14.296 "method": "bdev_nvme_attach_controller" 01:18:14.296 } 01:18:14.296 EOF 01:18:14.296 )") 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # 
FLUSH_PID=64209 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64214 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:18:14.296 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:18:14.296 { 01:18:14.296 "params": { 01:18:14.296 "name": "Nvme$subsystem", 01:18:14.296 "trtype": "$TEST_TRANSPORT", 01:18:14.296 "traddr": "$NVMF_FIRST_TARGET_IP", 01:18:14.296 "adrfam": "ipv4", 01:18:14.296 "trsvcid": "$NVMF_PORT", 01:18:14.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:18:14.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:18:14.296 "hdgst": ${hdgst:-false}, 01:18:14.296 "ddgst": ${ddgst:-false} 01:18:14.296 }, 01:18:14.297 "method": "bdev_nvme_attach_controller" 01:18:14.297 } 01:18:14.297 EOF 01:18:14.297 )") 01:18:14.297 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:18:14.297 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:18:14.297 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:18:14.297 "params": { 01:18:14.297 "name": "Nvme1", 01:18:14.297 "trtype": "tcp", 01:18:14.297 "traddr": "10.0.0.3", 01:18:14.297 "adrfam": "ipv4", 01:18:14.297 "trsvcid": "4420", 01:18:14.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:18:14.297 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:18:14.297 "hdgst": false, 01:18:14.297 "ddgst": false 01:18:14.297 }, 01:18:14.297 "method": "bdev_nvme_attach_controller" 01:18:14.297 }' 01:18:14.297 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:18:14.297 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
01:18:14.297 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:18:14.297 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:18:14.297 "params": { 01:18:14.297 "name": "Nvme1", 01:18:14.297 "trtype": "tcp", 01:18:14.297 "traddr": "10.0.0.3", 01:18:14.297 "adrfam": "ipv4", 01:18:14.297 "trsvcid": "4420", 01:18:14.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:18:14.297 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:18:14.297 "hdgst": false, 01:18:14.297 "ddgst": false 01:18:14.297 }, 01:18:14.297 "method": "bdev_nvme_attach_controller" 01:18:14.297 }' 01:18:14.297 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:18:14.297 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:18:14.297 "params": { 01:18:14.297 "name": "Nvme1", 01:18:14.297 "trtype": "tcp", 01:18:14.297 "traddr": "10.0.0.3", 01:18:14.297 "adrfam": "ipv4", 01:18:14.297 "trsvcid": "4420", 01:18:14.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:18:14.297 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:18:14.297 "hdgst": false, 01:18:14.297 "ddgst": false 01:18:14.297 }, 01:18:14.297 "method": "bdev_nvme_attach_controller" 01:18:14.297 }' 01:18:14.297 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:18:14.297 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:18:14.297 "params": { 01:18:14.297 "name": "Nvme1", 01:18:14.297 "trtype": "tcp", 01:18:14.297 "traddr": "10.0.0.3", 01:18:14.297 "adrfam": "ipv4", 01:18:14.297 "trsvcid": "4420", 01:18:14.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:18:14.297 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:18:14.297 "hdgst": false, 01:18:14.297 "ddgst": false 01:18:14.297 }, 01:18:14.297 "method": "bdev_nvme_attach_controller" 01:18:14.297 }' 01:18:14.297 [2024-12-09 05:12:56.727871] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:18:14.297 [2024-12-09 05:12:56.728442] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 01:18:14.297 [2024-12-09 05:12:56.743782] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:18:14.297 [2024-12-09 05:12:56.743848] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 01:18:14.556 [2024-12-09 05:12:56.752308] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:18:14.556 [2024-12-09 05:12:56.752474] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 01:18:14.556 [2024-12-09 05:12:56.753602] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:18:14.556 [2024-12-09 05:12:56.753774] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 01:18:14.556 05:12:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64205 01:18:14.556 [2024-12-09 05:12:56.935205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:14.556 [2024-12-09 05:12:56.984503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:18:14.556 [2024-12-09 05:12:56.997499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:18:14.556 [2024-12-09 05:12:56.998296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:14.815 [2024-12-09 05:12:57.046482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:18:14.815 [2024-12-09 05:12:57.059268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:18:14.815 [2024-12-09 05:12:57.065366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:14.815 [2024-12-09 05:12:57.113796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 01:18:14.815 Running I/O for 1 seconds... 01:18:14.815 [2024-12-09 05:12:57.126413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:18:14.815 [2024-12-09 05:12:57.127418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:14.815 Running I/O for 1 seconds... 01:18:14.815 [2024-12-09 05:12:57.174915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:18:14.815 [2024-12-09 05:12:57.187395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:18:14.815 Running I/O for 1 seconds... 01:18:15.074 Running I/O for 1 seconds... 
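The four bdevperf jobs traced above (write, read, flush and unmap on core masks 0x10, 0x20, 0x40 and 0x80) each receive their controller-attach configuration over an anonymous pipe: the --json /dev/fd/63 argument comes from bash process substitution around gen_nvmf_target_json, so the config never has to be written out as a file. The sketch below is a hypothetical standalone equivalent of that pattern, not the helper's actual implementation; the "subsystems" wrapper layout and the gen_attach_json name are assumptions, while the bdevperf path, target address, NQNs, instance ids and flags are the ones used in this run.

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

gen_attach_json() {
  # Emit a config that attaches the NVMe-oF controller exported earlier
  # (nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.3:4420 over TCP).
  cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1"
          }
        }
      ]
    }
  ]
}
JSON
}

# One worker per workload, each pinned to its own core mask and instance id,
# mirroring the -w write/read/flush/unmap jobs launched above.
i=1
for spec in "0x10 write" "0x20 read" "0x40 flush" "0x80 unmap"; do
  set -- $spec                                    # $1 = core mask, $2 = workload
  "$BDEVPERF" -m "$1" -i "$i" --json <(gen_attach_json) \
      -q 128 -o 4096 -w "$2" -t 1 -s 256 &
  i=$((i + 1))
done
wait                                              # collect all four workers

The wait 64205 / 64207 / 64209 / 64214 calls later in this trace serve the same purpose as the single wait here: the target is only torn down after all four workers have reported their one-second results.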
01:18:16.014 9389.00 IOPS, 36.68 MiB/s 01:18:16.014 Latency(us) 01:18:16.014 [2024-12-09T05:12:58.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:16.014 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 01:18:16.014 Nvme1n1 : 1.01 9431.11 36.84 0.00 0.00 13507.11 7955.90 18315.74 01:18:16.014 [2024-12-09T05:12:58.470Z] =================================================================================================================== 01:18:16.014 [2024-12-09T05:12:58.470Z] Total : 9431.11 36.84 0.00 0.00 13507.11 7955.90 18315.74 01:18:16.014 8400.00 IOPS, 32.81 MiB/s 01:18:16.014 Latency(us) 01:18:16.014 [2024-12-09T05:12:58.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:16.014 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 01:18:16.014 Nvme1n1 : 1.01 8458.61 33.04 0.00 0.00 15060.07 7269.06 25642.03 01:18:16.014 [2024-12-09T05:12:58.470Z] =================================================================================================================== 01:18:16.014 [2024-12-09T05:12:58.470Z] Total : 8458.61 33.04 0.00 0.00 15060.07 7269.06 25642.03 01:18:16.014 9240.00 IOPS, 36.09 MiB/s 01:18:16.014 Latency(us) 01:18:16.014 [2024-12-09T05:12:58.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:16.014 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 01:18:16.014 Nvme1n1 : 1.01 9322.16 36.41 0.00 0.00 13680.89 6009.85 25069.67 01:18:16.014 [2024-12-09T05:12:58.470Z] =================================================================================================================== 01:18:16.014 [2024-12-09T05:12:58.470Z] Total : 9322.16 36.41 0.00 0.00 13680.89 6009.85 25069.67 01:18:16.014 179592.00 IOPS, 701.53 MiB/s 01:18:16.014 Latency(us) 01:18:16.014 [2024-12-09T05:12:58.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:16.014 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 01:18:16.014 Nvme1n1 : 1.00 179238.65 700.15 0.00 0.00 710.29 336.27 1974.67 01:18:16.014 [2024-12-09T05:12:58.470Z] =================================================================================================================== 01:18:16.014 [2024-12-09T05:12:58.470Z] Total : 179238.65 700.15 0.00 0.00 710.29 336.27 1974.67 01:18:16.014 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64207 01:18:16.014 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64209 01:18:16.274 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64214 01:18:16.274 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:18:16.274 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:16.274 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:18:16.275 rmmod nvme_tcp 01:18:16.275 rmmod nvme_fabrics 01:18:16.275 rmmod nvme_keyring 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64170 ']' 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64170 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64170 ']' 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64170 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64170 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64170' 01:18:16.275 killing process with pid 64170 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64170 01:18:16.275 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64170 01:18:16.535 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:18:16.535 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:18:16.535 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:18:16.535 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 01:18:16.535 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 01:18:16.535 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:18:16.535 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 01:18:16.535 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:18:16.535 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:18:16.535 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:18:16.535 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:18:16.535 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:18:16.536 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:18:16.536 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:18:16.536 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:18:16.536 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:18:16.796 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:18:16.796 05:12:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:18:16.796 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:18:16.796 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:18:16.796 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:18:16.796 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:18:16.796 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 01:18:16.796 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:16.796 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:18:16.796 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:16.796 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 01:18:16.796 01:18:16.796 real 0m4.503s 01:18:16.796 user 0m17.485s 01:18:16.796 sys 0m2.202s 01:18:16.796 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:16.796 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:18:16.796 ************************************ 01:18:16.796 END TEST nvmf_bdev_io_wait 01:18:16.796 ************************************ 01:18:16.796 05:12:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 01:18:16.796 05:12:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:18:16.796 05:12:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:16.796 05:12:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:18:16.796 ************************************ 01:18:16.796 START TEST nvmf_queue_depth 01:18:16.796 ************************************ 01:18:16.796 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 01:18:17.057 * Looking for test storage... 
01:18:17.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:18:17.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:17.057 --rc genhtml_branch_coverage=1 01:18:17.057 --rc genhtml_function_coverage=1 01:18:17.057 --rc genhtml_legend=1 01:18:17.057 --rc geninfo_all_blocks=1 01:18:17.057 --rc geninfo_unexecuted_blocks=1 01:18:17.057 01:18:17.057 ' 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:18:17.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:17.057 --rc genhtml_branch_coverage=1 01:18:17.057 --rc genhtml_function_coverage=1 01:18:17.057 --rc genhtml_legend=1 01:18:17.057 --rc geninfo_all_blocks=1 01:18:17.057 --rc geninfo_unexecuted_blocks=1 01:18:17.057 01:18:17.057 ' 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:18:17.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:17.057 --rc genhtml_branch_coverage=1 01:18:17.057 --rc genhtml_function_coverage=1 01:18:17.057 --rc genhtml_legend=1 01:18:17.057 --rc geninfo_all_blocks=1 01:18:17.057 --rc geninfo_unexecuted_blocks=1 01:18:17.057 01:18:17.057 ' 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:18:17.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:17.057 --rc genhtml_branch_coverage=1 01:18:17.057 --rc genhtml_function_coverage=1 01:18:17.057 --rc genhtml_legend=1 01:18:17.057 --rc geninfo_all_blocks=1 01:18:17.057 --rc geninfo_unexecuted_blocks=1 01:18:17.057 01:18:17.057 ' 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:17.057 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:17.058 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:17.058 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 01:18:17.058 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:17.058 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 01:18:17.058 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:18:17.058 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:18:17.058 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:18:17.058 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:18:17.058 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:18:17.058 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:18:17.319 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 01:18:17.319 
05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:18:17.319 05:12:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:18:17.319 Cannot find device "nvmf_init_br" 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:18:17.319 Cannot find device "nvmf_init_br2" 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:18:17.319 Cannot find device "nvmf_tgt_br" 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:18:17.319 Cannot find device "nvmf_tgt_br2" 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:18:17.319 Cannot find device "nvmf_init_br" 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 01:18:17.319 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:18:17.320 Cannot find device "nvmf_init_br2" 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:18:17.320 Cannot find device "nvmf_tgt_br" 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:18:17.320 Cannot find device "nvmf_tgt_br2" 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:18:17.320 Cannot find device "nvmf_br" 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:18:17.320 Cannot find device "nvmf_init_if" 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:18:17.320 Cannot find device "nvmf_init_if2" 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:18:17.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:17.320 05:12:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:18:17.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:18:17.320 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:18:17.580 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:18:17.580 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:18:17.580 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:18:17.580 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:18:17.580 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:18:17.580 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:18:17.580 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:18:17.580 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:18:17.580 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:18:17.580 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:18:17.580 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:18:17.581 
05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:18:17.581 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:18:17.581 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms 01:18:17.581 01:18:17.581 --- 10.0.0.3 ping statistics --- 01:18:17.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:17.581 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:18:17.581 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:18:17.581 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 01:18:17.581 01:18:17.581 --- 10.0.0.4 ping statistics --- 01:18:17.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:17.581 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:18:17.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:18:17.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 01:18:17.581 01:18:17.581 --- 10.0.0.1 ping statistics --- 01:18:17.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:17.581 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:18:17.581 05:12:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:18:17.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:18:17.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 01:18:17.581 01:18:17.581 --- 10.0.0.2 ping statistics --- 01:18:17.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:17.581 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 01:18:17.581 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:18:17.581 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 01:18:17.581 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:18:17.581 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:18:17.581 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:18:17.581 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:18:17.581 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:18:17.581 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:18:17.581 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:18:17.840 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 01:18:17.840 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:18:17.841 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 01:18:17.841 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:17.841 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64495 01:18:17.841 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:18:17.841 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64495 01:18:17.841 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64495 ']' 01:18:17.841 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:17.841 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:17.841 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:17.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:18:17.841 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:17.841 05:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:17.841 [2024-12-09 05:13:00.107906] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:18:17.841 [2024-12-09 05:13:00.108063] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:18:17.841 [2024-12-09 05:13:00.262104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:18.101 [2024-12-09 05:13:00.318305] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:18:18.101 [2024-12-09 05:13:00.318439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:18:18.101 [2024-12-09 05:13:00.318510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:18:18.101 [2024-12-09 05:13:00.318518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:18:18.101 [2024-12-09 05:13:00.318524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:18:18.101 [2024-12-09 05:13:00.318808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:18:18.101 [2024-12-09 05:13:00.361386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:18.670 [2024-12-09 05:13:01.087840] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:18.670 Malloc0 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:18.670 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:18.930 [2024-12-09 05:13:01.146984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64527 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64527 /var/tmp/bdevperf.sock 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64527 ']' 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:18:18.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:18.930 05:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:18.930 [2024-12-09 05:13:01.208835] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
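Target configuration for the queue-depth test is done entirely through RPCs, visible above as rpc_cmd calls (rpc_cmd is autotest's wrapper around scripts/rpc.py): create the TCP transport, back a subsystem with a 64 MiB malloc bdev, and listen on 10.0.0.3:4420. The equivalent direct invocations, with the transport options copied verbatim from the trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0            # 64 MiB RAM bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Here -a allows any host NQN to connect and -s sets the serial number the initiator will see; the multipath test later repeats the same sequence with serial SPDKISFASTANDAWESOME and two listeners.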
01:18:18.930 [2024-12-09 05:13:01.208912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64527 ] 01:18:18.930 [2024-12-09 05:13:01.360176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:19.189 [2024-12-09 05:13:01.418274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:19.189 [2024-12-09 05:13:01.461372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:18:19.757 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:19.757 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 01:18:19.757 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:18:19.757 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:19.757 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:19.757 NVMe0n1 01:18:19.757 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:19.757 05:13:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:18:20.015 Running I/O for 10 seconds... 01:18:21.890 8222.00 IOPS, 32.12 MiB/s [2024-12-09T05:13:05.725Z] 8974.50 IOPS, 35.06 MiB/s [2024-12-09T05:13:06.347Z] 9311.00 IOPS, 36.37 MiB/s [2024-12-09T05:13:07.723Z] 9675.25 IOPS, 37.79 MiB/s [2024-12-09T05:13:08.659Z] 9853.60 IOPS, 38.49 MiB/s [2024-12-09T05:13:09.595Z] 10000.83 IOPS, 39.07 MiB/s [2024-12-09T05:13:10.532Z] 10085.14 IOPS, 39.40 MiB/s [2024-12-09T05:13:11.469Z] 10003.88 IOPS, 39.08 MiB/s [2024-12-09T05:13:12.405Z] 9901.11 IOPS, 38.68 MiB/s [2024-12-09T05:13:12.405Z] 9802.40 IOPS, 38.29 MiB/s 01:18:29.949 Latency(us) 01:18:29.949 [2024-12-09T05:13:12.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:29.949 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 01:18:29.949 Verification LBA range: start 0x0 length 0x4000 01:18:29.949 NVMe0n1 : 10.07 9825.75 38.38 0.00 0.00 103730.46 20261.79 80589.25 01:18:29.949 [2024-12-09T05:13:12.405Z] =================================================================================================================== 01:18:29.949 [2024-12-09T05:13:12.405Z] Total : 9825.75 38.38 0.00 0.00 103730.46 20261.79 80589.25 01:18:29.949 { 01:18:29.949 "results": [ 01:18:29.949 { 01:18:29.949 "job": "NVMe0n1", 01:18:29.949 "core_mask": "0x1", 01:18:29.949 "workload": "verify", 01:18:29.949 "status": "finished", 01:18:29.949 "verify_range": { 01:18:29.949 "start": 0, 01:18:29.949 "length": 16384 01:18:29.949 }, 01:18:29.949 "queue_depth": 1024, 01:18:29.949 "io_size": 4096, 01:18:29.949 "runtime": 10.069872, 01:18:29.949 "iops": 9825.74555068823, 01:18:29.949 "mibps": 38.3818185573759, 01:18:29.949 "io_failed": 0, 01:18:29.949 "io_timeout": 0, 01:18:29.949 "avg_latency_us": 103730.45781591599, 01:18:29.949 "min_latency_us": 20261.785152838427, 01:18:29.949 "max_latency_us": 80589.24716157206 
01:18:29.949 } 01:18:29.949 ], 01:18:29.949 "core_count": 1 01:18:29.949 } 01:18:29.949 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64527 01:18:29.949 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64527 ']' 01:18:29.949 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64527 01:18:29.949 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 01:18:29.949 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:29.949 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64527 01:18:30.208 killing process with pid 64527 01:18:30.208 Received shutdown signal, test time was about 10.000000 seconds 01:18:30.208 01:18:30.208 Latency(us) 01:18:30.208 [2024-12-09T05:13:12.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:18:30.208 [2024-12-09T05:13:12.664Z] =================================================================================================================== 01:18:30.208 [2024-12-09T05:13:12.664Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:18:30.208 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:18:30.208 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:18:30.208 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64527' 01:18:30.208 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64527 01:18:30.208 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64527 01:18:30.208 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 01:18:30.208 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 01:18:30.208 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 01:18:30.208 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:18:30.467 rmmod nvme_tcp 01:18:30.467 rmmod nvme_fabrics 01:18:30.467 rmmod nvme_keyring 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64495 ']' 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64495 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64495 ']' 
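The measurement itself is driven by bdevperf in RPC-controlled mode, as shown above: start it idle with -z on its own RPC socket, attach the remote namespace as a local bdev over NVMe/TCP, then trigger the run with bdevperf.py. Condensed (the & backgrounding stands in for the wrapper the test script uses):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock

  # 4 KiB verify workload for 10 s at queue depth 1024, waiting for RPC configuration
  $BDEVPERF -z -r $SOCK -q 1024 -o 4096 -w verify -t 10 &

  # attach the target's namespace; it shows up as bdev NVMe0n1
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # run the configured job and emit the JSON summary shown above
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

The reported ~9826 IOPS at ~104 ms average latency is self-consistent for a saturated queue of 1024 outstanding I/Os (1024 / 9826 IOPS ≈ 0.104 s per in-flight I/O).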
01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64495 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64495 01:18:30.467 killing process with pid 64495 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64495' 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64495 01:18:30.467 05:13:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64495 01:18:30.724 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:18:30.724 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:18:30.724 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:18:30.724 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 01:18:30.724 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 01:18:30.724 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:18:30.724 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 01:18:30.724 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:18:30.724 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:18:30.724 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:18:30.724 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:18:30.725 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:18:30.725 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:18:30.725 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:18:30.725 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:18:30.725 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:18:30.725 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:18:30.725 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:18:30.982 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:18:30.982 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:18:30.982 05:13:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:18:30.982 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:18:30.982 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 01:18:30.982 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:30.982 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:18:30.982 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:30.982 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 01:18:30.982 01:18:30.982 real 0m14.094s 01:18:30.982 user 0m23.749s 01:18:30.982 sys 0m2.331s 01:18:30.982 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:30.982 ************************************ 01:18:30.982 END TEST nvmf_queue_depth 01:18:30.982 ************************************ 01:18:30.982 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:18:30.982 05:13:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 01:18:30.982 05:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:18:30.982 05:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:30.982 05:13:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:18:30.982 ************************************ 01:18:30.982 START TEST nvmf_target_multipath 01:18:30.982 ************************************ 01:18:30.982 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 01:18:31.279 * Looking for test storage... 
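Teardown (nvmftestfini, closing out the queue-depth test above) mirrors the setup: the nvme-tcp/nvme-fabrics modules are unloaded, the target process is killed, the SPDK_NVMF-tagged firewall rules are stripped, and the veth/bridge topology is deleted. The iptables step works because every rule added at setup carried an '-m comment --comment SPDK_NVMF:...' tag, so a filtered save/restore removes exactly those rules. Roughly (remove_spdk_ns is not expanded in the trace; ip netns delete is the assumed equivalent):

  # drop only the rules tagged SPDK_NVMF at setup time
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # unwind the links, the bridge and the namespace
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns delete nvmf_tgt_ns_spdk    # assumed; the trace only shows remove_spdk_ns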
01:18:31.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:18:31.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:31.279 --rc genhtml_branch_coverage=1 01:18:31.279 --rc genhtml_function_coverage=1 01:18:31.279 --rc genhtml_legend=1 01:18:31.279 --rc geninfo_all_blocks=1 01:18:31.279 --rc geninfo_unexecuted_blocks=1 01:18:31.279 01:18:31.279 ' 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:18:31.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:31.279 --rc genhtml_branch_coverage=1 01:18:31.279 --rc genhtml_function_coverage=1 01:18:31.279 --rc genhtml_legend=1 01:18:31.279 --rc geninfo_all_blocks=1 01:18:31.279 --rc geninfo_unexecuted_blocks=1 01:18:31.279 01:18:31.279 ' 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:18:31.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:31.279 --rc genhtml_branch_coverage=1 01:18:31.279 --rc genhtml_function_coverage=1 01:18:31.279 --rc genhtml_legend=1 01:18:31.279 --rc geninfo_all_blocks=1 01:18:31.279 --rc geninfo_unexecuted_blocks=1 01:18:31.279 01:18:31.279 ' 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:18:31.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:31.279 --rc genhtml_branch_coverage=1 01:18:31.279 --rc genhtml_function_coverage=1 01:18:31.279 --rc genhtml_legend=1 01:18:31.279 --rc geninfo_all_blocks=1 01:18:31.279 --rc geninfo_unexecuted_blocks=1 01:18:31.279 01:18:31.279 ' 01:18:31.279 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:31.280 
05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:18:31.280 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:18:31.280 05:13:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:18:31.280 Cannot find device "nvmf_init_br" 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:18:31.280 Cannot find device "nvmf_init_br2" 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 01:18:31.280 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:18:31.538 Cannot find device "nvmf_tgt_br" 01:18:31.538 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 01:18:31.538 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:18:31.538 Cannot find device "nvmf_tgt_br2" 01:18:31.538 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 01:18:31.538 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:18:31.538 Cannot find device "nvmf_init_br" 01:18:31.538 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 01:18:31.538 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:18:31.538 Cannot find device "nvmf_init_br2" 01:18:31.538 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 01:18:31.538 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:18:31.538 Cannot find device "nvmf_tgt_br" 01:18:31.538 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:18:31.539 Cannot find device "nvmf_tgt_br2" 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:18:31.539 Cannot find device "nvmf_br" 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:18:31.539 Cannot find device "nvmf_init_if" 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:18:31.539 Cannot find device "nvmf_init_if2" 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:18:31.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:18:31.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:18:31.539 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:18:31.797 05:13:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:18:31.797 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:18:31.797 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 01:18:31.797 01:18:31.797 --- 10.0.0.3 ping statistics --- 01:18:31.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:31.797 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:18:31.797 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:18:31.797 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 01:18:31.797 01:18:31.797 --- 10.0.0.4 ping statistics --- 01:18:31.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:31.797 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:18:31.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:18:31.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 01:18:31.797 01:18:31.797 --- 10.0.0.1 ping statistics --- 01:18:31.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:31.797 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:18:31.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:18:31.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 01:18:31.797 01:18:31.797 --- 10.0.0.2 ping statistics --- 01:18:31.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:31.797 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64909 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64909 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64909 ']' 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:31.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
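With both target addresses answering pings and nvme-tcp loaded, the multipath test that follows builds one subsystem (created with -r, which enables ANA reporting) with listeners on 10.0.0.3 and 10.0.0.4, connects the host to each listener, and then flips per-listener ANA states while fio runs. A host-side condensation of the steps that appear in the trace below (the -g/-G flags and the controller names nvme0c0n1/nvme0c1n1 are copied as-is; which controller maps to which address is decided by enumeration order):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b
  HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b

  # one connect per listener; both land in the same NVMe subsystem on the host
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G

  # the kernel exposes one namespace with two controller paths; ANA state per path
  cat /sys/block/nvme0c0n1/ana_state
  cat /sys/block/nvme0c1n1/ana_state

  # the target can demote a path while I/O is running, e.g.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible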
01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:31.797 05:13:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:18:31.798 [2024-12-09 05:13:14.117575] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:18:31.798 [2024-12-09 05:13:14.117665] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:18:32.057 [2024-12-09 05:13:14.273919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:18:32.057 [2024-12-09 05:13:14.328562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:18:32.057 [2024-12-09 05:13:14.328616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:18:32.057 [2024-12-09 05:13:14.328622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:18:32.057 [2024-12-09 05:13:14.328627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:18:32.057 [2024-12-09 05:13:14.328631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:18:32.057 [2024-12-09 05:13:14.329558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:18:32.057 [2024-12-09 05:13:14.329613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:18:32.057 [2024-12-09 05:13:14.329988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:32.057 [2024-12-09 05:13:14.329989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:18:32.057 [2024-12-09 05:13:14.371266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:18:32.624 05:13:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:32.624 05:13:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 01:18:32.624 05:13:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:18:32.624 05:13:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 01:18:32.624 05:13:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:18:32.624 05:13:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:18:32.624 05:13:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:18:32.882 [2024-12-09 05:13:15.302126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:18:32.882 05:13:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:18:33.139 Malloc0 01:18:33.397 05:13:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 01:18:33.397 05:13:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:18:33.656 05:13:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:18:33.916 [2024-12-09 05:13:16.304404] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:18:33.916 05:13:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 01:18:34.175 [2024-12-09 05:13:16.564130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 01:18:34.175 05:13:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid=0567fff2-ddaf-4a4f-877d-a2600d7e662b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 01:18:34.440 05:13:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid=0567fff2-ddaf-4a4f-877d-a2600d7e662b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 01:18:34.440 05:13:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 01:18:34.440 05:13:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 01:18:34.440 05:13:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:18:34.440 05:13:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:18:34.440 05:13:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 01:18:36.987 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65003 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 01:18:36.988 05:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 01:18:36.988 [global] 01:18:36.988 thread=1 01:18:36.988 invalidate=1 01:18:36.988 rw=randrw 01:18:36.988 time_based=1 01:18:36.988 runtime=6 01:18:36.988 ioengine=libaio 01:18:36.988 direct=1 01:18:36.988 bs=4096 01:18:36.988 iodepth=128 01:18:36.988 norandommap=0 01:18:36.988 numjobs=1 01:18:36.988 01:18:36.988 verify_dump=1 01:18:36.988 verify_backlog=512 01:18:36.988 verify_state_save=0 01:18:36.988 do_verify=1 01:18:36.988 verify=crc32c-intel 01:18:36.988 [job0] 01:18:36.988 filename=/dev/nvme0n1 01:18:36.988 Could not set queue depth (nvme0n1) 01:18:36.988 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:18:36.988 fio-3.35 01:18:36.988 Starting 1 thread 01:18:37.557 05:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:18:37.817 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 01:18:38.077 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 01:18:38.077 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 01:18:38.077 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:18:38.077 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:18:38.077 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:18:38.077 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:18:38.077 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 01:18:38.077 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 01:18:38.077 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:18:38.077 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:18:38.077 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:18:38.077 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:18:38.077 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:18:38.337 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 01:18:38.596 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 01:18:38.596 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 01:18:38.596 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:18:38.596 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:18:38.596 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:18:38.596 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:18:38.596 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 01:18:38.596 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 01:18:38.596 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:18:38.596 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:18:38.596 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:18:38.596 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:18:38.596 05:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65003 01:18:43.890 01:18:43.890 job0: (groupid=0, jobs=1): err= 0: pid=65025: Mon Dec 9 05:13:25 2024 01:18:43.890 read: IOPS=12.5k, BW=48.7MiB/s (51.1MB/s)(292MiB/6002msec) 01:18:43.890 slat (usec): min=3, max=5300, avg=44.94, stdev=158.78 01:18:43.890 clat (usec): min=1448, max=15412, avg=7047.72, stdev=1245.76 01:18:43.890 lat (usec): min=1469, max=15422, avg=7092.67, stdev=1250.66 01:18:43.890 clat percentiles (usec): 01:18:43.890 | 1.00th=[ 4228], 5.00th=[ 5276], 10.00th=[ 5932], 20.00th=[ 6390], 01:18:43.890 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 7046], 01:18:43.890 | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[ 8225], 95.00th=[ 9896], 01:18:43.890 | 99.00th=[11338], 99.50th=[11600], 99.90th=[12649], 99.95th=[14353], 01:18:43.890 | 99.99th=[15401] 01:18:43.890 bw ( KiB/s): min=16024, max=35256, per=52.69%, avg=26290.91, stdev=6020.06, samples=11 01:18:43.890 iops : min= 4006, max= 8814, avg=6572.73, stdev=1505.01, samples=11 01:18:43.890 write: IOPS=7326, BW=28.6MiB/s (30.0MB/s)(149MiB/5205msec); 0 zone resets 01:18:43.890 slat (usec): min=5, max=5836, avg=57.60, stdev=107.59 01:18:43.890 clat (usec): min=1158, max=15022, avg=6114.31, stdev=1067.48 01:18:43.890 lat (usec): min=1212, max=15049, avg=6171.91, stdev=1071.50 01:18:43.890 clat percentiles (usec): 01:18:43.890 | 1.00th=[ 3687], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5407], 01:18:43.890 | 30.00th=[ 5735], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6325], 01:18:43.890 | 70.00th=[ 6587], 80.00th=[ 6783], 90.00th=[ 7177], 95.00th=[ 7504], 01:18:43.890 | 99.00th=[ 9503], 99.50th=[10421], 99.90th=[13960], 99.95th=[14615], 01:18:43.890 | 99.99th=[15008] 01:18:43.890 bw ( KiB/s): min=16384, max=34560, per=89.55%, avg=26245.09, stdev=5652.07, samples=11 01:18:43.890 iops : min= 4096, max= 8640, avg=6561.27, stdev=1413.02, samples=11 01:18:43.890 lat (msec) : 2=0.06%, 4=1.16%, 10=95.51%, 20=3.27% 01:18:43.890 cpu : usr=6.82%, sys=31.21%, ctx=6911, majf=0, minf=90 01:18:43.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 01:18:43.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:18:43.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:18:43.890 issued rwts: total=74862,38135,0,0 short=0,0,0,0 dropped=0,0,0,0 01:18:43.890 latency : target=0, window=0, percentile=100.00%, depth=128 01:18:43.890 01:18:43.890 Run status group 0 (all jobs): 01:18:43.890 READ: bw=48.7MiB/s (51.1MB/s), 48.7MiB/s-48.7MiB/s (51.1MB/s-51.1MB/s), io=292MiB (307MB), run=6002-6002msec 01:18:43.891 WRITE: bw=28.6MiB/s (30.0MB/s), 28.6MiB/s-28.6MiB/s (30.0MB/s-30.0MB/s), io=149MiB (156MB), run=5205-5205msec 01:18:43.891 01:18:43.891 Disk stats (read/write): 01:18:43.891 nvme0n1: ios=73121/38135, merge=0/0, ticks=479737/209693, in_queue=689430, util=98.68% 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65102 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 01:18:43.891 05:13:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 01:18:43.891 [global] 01:18:43.891 thread=1 01:18:43.891 invalidate=1 01:18:43.891 rw=randrw 01:18:43.891 time_based=1 01:18:43.891 runtime=6 01:18:43.891 ioengine=libaio 01:18:43.891 direct=1 01:18:43.891 bs=4096 01:18:43.891 iodepth=128 01:18:43.891 norandommap=0 01:18:43.891 numjobs=1 01:18:43.891 01:18:43.891 verify_dump=1 01:18:43.891 verify_backlog=512 01:18:43.891 verify_state_save=0 01:18:43.891 do_verify=1 01:18:43.891 verify=crc32c-intel 01:18:43.891 [job0] 01:18:43.891 filename=/dev/nvme0n1 01:18:43.891 Could not set queue depth (nvme0n1) 01:18:43.891 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:18:43.891 fio-3.35 01:18:43.891 Starting 1 thread 01:18:44.477 05:13:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:18:44.736 05:13:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 01:18:44.996 
05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 01:18:44.996 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 01:18:44.996 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:18:44.996 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:18:44.996 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:18:44.996 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:18:44.996 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 01:18:44.996 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 01:18:44.996 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:18:44.996 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:18:44.996 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:18:44.996 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:18:44.996 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:18:44.996 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 01:18:45.256 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 01:18:45.256 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 01:18:45.256 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:18:45.256 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:18:45.256 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 01:18:45.256 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:18:45.256 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 01:18:45.256 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 01:18:45.256 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:18:45.256 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:18:45.256 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:18:45.256 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:18:45.256 05:13:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65102 01:18:50.556 01:18:50.556 job0: (groupid=0, jobs=1): err= 0: pid=65123: Mon Dec 9 05:13:32 2024 01:18:50.556 read: IOPS=12.6k, BW=49.0MiB/s (51.4MB/s)(294MiB/6002msec) 01:18:50.556 slat (nsec): min=1452, max=6426.4k, avg=39717.38, stdev=162928.70 01:18:50.556 clat (usec): min=298, max=18181, avg=7072.02, stdev=1830.49 01:18:50.556 lat (usec): min=308, max=18195, avg=7111.74, stdev=1843.58 01:18:50.556 clat percentiles (usec): 01:18:50.556 | 1.00th=[ 3228], 5.00th=[ 4293], 10.00th=[ 4883], 20.00th=[ 5538], 01:18:50.556 | 30.00th=[ 6325], 40.00th=[ 6783], 50.00th=[ 7111], 60.00th=[ 7439], 01:18:50.556 | 70.00th=[ 7701], 80.00th=[ 8029], 90.00th=[ 8848], 95.00th=[10683], 01:18:50.556 | 99.00th=[12911], 99.50th=[13698], 99.90th=[16057], 99.95th=[16909], 01:18:50.556 | 99.99th=[17957] 01:18:50.556 bw ( KiB/s): min=17792, max=41216, per=53.08%, avg=26656.73, stdev=7060.42, samples=11 01:18:50.556 iops : min= 4448, max=10304, avg=6664.18, stdev=1765.11, samples=11 01:18:50.556 write: IOPS=7207, BW=28.2MiB/s (29.5MB/s)(148MiB/5252msec); 0 zone resets 01:18:50.556 slat (usec): min=2, max=4220, avg=54.74, stdev=111.45 01:18:50.556 clat (usec): min=252, max=18108, avg=6034.33, stdev=1775.32 01:18:50.556 lat (usec): min=317, max=18143, avg=6089.07, stdev=1789.82 01:18:50.556 clat percentiles (usec): 01:18:50.556 | 1.00th=[ 2671], 5.00th=[ 3326], 10.00th=[ 3752], 20.00th=[ 4359], 01:18:50.556 | 30.00th=[ 5014], 40.00th=[ 5800], 50.00th=[ 6194], 60.00th=[ 6521], 01:18:50.556 | 70.00th=[ 6849], 80.00th=[ 7177], 90.00th=[ 7767], 95.00th=[ 8848], 01:18:50.556 | 99.00th=[11863], 99.50th=[12518], 99.90th=[13698], 99.95th=[14746], 01:18:50.556 | 99.99th=[16450] 01:18:50.556 bw ( KiB/s): min=18752, max=40304, per=92.28%, avg=26605.82, stdev=6805.40, samples=11 01:18:50.556 iops : min= 4688, max=10076, avg=6651.45, stdev=1701.35, samples=11 01:18:50.556 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 01:18:50.556 lat (msec) : 2=0.15%, 4=6.47%, 10=87.86%, 20=5.50% 01:18:50.556 cpu : usr=6.03%, sys=29.44%, ctx=7891, majf=0, minf=78 01:18:50.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 01:18:50.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:18:50.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:18:50.556 issued rwts: total=75355,37854,0,0 short=0,0,0,0 dropped=0,0,0,0 01:18:50.556 latency : 
target=0, window=0, percentile=100.00%, depth=128 01:18:50.556 01:18:50.556 Run status group 0 (all jobs): 01:18:50.556 READ: bw=49.0MiB/s (51.4MB/s), 49.0MiB/s-49.0MiB/s (51.4MB/s-51.4MB/s), io=294MiB (309MB), run=6002-6002msec 01:18:50.556 WRITE: bw=28.2MiB/s (29.5MB/s), 28.2MiB/s-28.2MiB/s (29.5MB/s-29.5MB/s), io=148MiB (155MB), run=5252-5252msec 01:18:50.556 01:18:50.556 Disk stats (read/write): 01:18:50.556 nvme0n1: ios=74378/37342, merge=0/0, ticks=481263/196534, in_queue=677797, util=98.68% 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:18:50.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:18:50.556 rmmod nvme_tcp 01:18:50.556 rmmod nvme_fabrics 01:18:50.556 rmmod nvme_keyring 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 64909 ']' 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64909 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64909 ']' 01:18:50.556 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64909 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64909 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:18:50.557 killing process with pid 64909 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64909' 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64909 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64909 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:18:50.557 
05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:18:50.557 05:13:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 01:18:50.815 01:18:50.815 real 0m19.611s 01:18:50.815 user 1m13.613s 01:18:50.815 sys 0m9.490s 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:18:50.815 ************************************ 01:18:50.815 END TEST nvmf_target_multipath 01:18:50.815 ************************************ 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:18:50.815 ************************************ 01:18:50.815 START TEST nvmf_zcopy 01:18:50.815 ************************************ 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 01:18:50.815 * Looking for test storage... 
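For reference, the multipath run that just completed (TEST nvmf_target_multipath above) boils down to the sequence below. This is a condensed sketch built only from the commands visible in the trace, using this run's addresses, NQN and serial; it is not the full target/multipath.sh script.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side: one malloc namespace exported through two TCP listeners
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
    # initiator side: connect both paths of the same subsystem
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G
    # flip ANA states per listener while fio drives random read/write I/O
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v

The check_ana_state calls interleaved above read /sys/block/nvme0c<path>n1/ana_state after each transition and compare it against the expected state (with a 20-second timeout). The first fio pass is preceded by 'echo numa' and the second by 'echo round-robin', selecting the multipath I/O policy under test, and both passes finish with err=0.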
01:18:50.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:18:50.815 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 01:18:50.816 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:18:50.816 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:18:50.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:50.816 --rc genhtml_branch_coverage=1 01:18:50.816 --rc genhtml_function_coverage=1 01:18:50.816 --rc genhtml_legend=1 01:18:50.816 --rc geninfo_all_blocks=1 01:18:50.816 --rc geninfo_unexecuted_blocks=1 01:18:50.816 01:18:50.816 ' 01:18:50.816 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:18:50.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:50.816 --rc genhtml_branch_coverage=1 01:18:50.816 --rc genhtml_function_coverage=1 01:18:50.816 --rc genhtml_legend=1 01:18:50.816 --rc geninfo_all_blocks=1 01:18:50.816 --rc geninfo_unexecuted_blocks=1 01:18:50.816 01:18:50.816 ' 01:18:50.816 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:18:50.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:50.816 --rc genhtml_branch_coverage=1 01:18:50.816 --rc genhtml_function_coverage=1 01:18:50.816 --rc genhtml_legend=1 01:18:50.816 --rc geninfo_all_blocks=1 01:18:50.816 --rc geninfo_unexecuted_blocks=1 01:18:50.816 01:18:50.816 ' 01:18:50.816 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:18:50.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:18:50.816 --rc genhtml_branch_coverage=1 01:18:50.816 --rc genhtml_function_coverage=1 01:18:50.816 --rc genhtml_legend=1 01:18:50.816 --rc geninfo_all_blocks=1 01:18:50.816 --rc geninfo_unexecuted_blocks=1 01:18:50.816 01:18:50.816 ' 01:18:50.816 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
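The cmp_versions walk a few lines above (checking whether the installed lcov is older than 2) is easier to read untangled from the xtrace. The following is an illustrative reconstruction of that comparison, not the verbatim scripts/common.sh source:

    # split both versions on . - : and compare field by field; missing fields count as 0
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first version is newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first version is older
        done
        return 1                                              # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is older than 2.x, keep the legacy --rc lcov_* options"

In the run above the comparison returns 0 (1 < 2 in the first field), so the zcopy test exports the pre-2.0 lcov_branch_coverage/lcov_function_coverage flags seen in LCOV_OPTS.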
01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:18:51.075 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
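The nvmftestinit that follows builds a veth/bridge topology between the initiator (host namespace) and the target namespace nvmf_tgt_ns_spdk. Condensed to one of the two paths it creates (the nvmf_init_if2/nvmf_tgt_if2 pair for 10.0.0.2/10.0.0.4 is built the same way), the ip and iptables commands traced below amount to:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the two sides together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                           # initiator -> target through the bridge

The harness additionally tags each iptables rule with an 'SPDK_NVMF:' comment so that the iptr helper used at teardown (iptables-save | grep -v SPDK_NVMF | iptables-restore) removes exactly the rules it added.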
01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:18:51.075 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:18:51.076 Cannot find device "nvmf_init_br" 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 01:18:51.076 05:13:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:18:51.076 Cannot find device "nvmf_init_br2" 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:18:51.076 Cannot find device "nvmf_tgt_br" 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:18:51.076 Cannot find device "nvmf_tgt_br2" 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:18:51.076 Cannot find device "nvmf_init_br" 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:18:51.076 Cannot find device "nvmf_init_br2" 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:18:51.076 Cannot find device "nvmf_tgt_br" 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:18:51.076 Cannot find device "nvmf_tgt_br2" 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:18:51.076 Cannot find device "nvmf_br" 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:18:51.076 Cannot find device "nvmf_init_if" 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:18:51.076 Cannot find device "nvmf_init_if2" 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:18:51.076 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:18:51.076 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:18:51.076 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:18:51.334 05:13:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:18:51.334 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:18:51.334 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 01:18:51.334 01:18:51.334 --- 10.0.0.3 ping statistics --- 01:18:51.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:51.334 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:18:51.334 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:18:51.334 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.127 ms 01:18:51.334 01:18:51.334 --- 10.0.0.4 ping statistics --- 01:18:51.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:51.334 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:18:51.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:18:51.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 01:18:51.334 01:18:51.334 --- 10.0.0.1 ping statistics --- 01:18:51.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:51.334 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:18:51.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:18:51.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 01:18:51.334 01:18:51.334 --- 10.0.0.2 ping statistics --- 01:18:51.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:18:51.334 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65426 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65426 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65426 ']' 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 01:18:51.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 01:18:51.334 05:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:18:51.334 [2024-12-09 05:13:33.754811] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
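nvmfappstart -m 0x2 in the trace above comes down to launching the target binary inside the namespace and blocking until its RPC socket is up; backgrounding and pid capture happen inside nvmf/common.sh and are shown here schematically:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                  # 65426 in this run
    waitforlisten "$nvmfpid"    # waits for the app to listen on /var/tmp/spdk.sock
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

With -m 0x2 the zcopy target gets a single core (reactor on core 1), in contrast to the -c 0xF / four-reactor configuration used by the multipath target earlier in this log.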
01:18:51.334 [2024-12-09 05:13:33.754881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:18:51.591 [2024-12-09 05:13:33.891646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:51.591 [2024-12-09 05:13:33.948874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:18:51.591 [2024-12-09 05:13:33.948910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:18:51.591 [2024-12-09 05:13:33.948917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:18:51.591 [2024-12-09 05:13:33.948921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:18:51.591 [2024-12-09 05:13:33.948926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:18:51.591 [2024-12-09 05:13:33.949218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:18:51.591 [2024-12-09 05:13:33.994076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:18:51.848 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:18:51.848 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 01:18:51.848 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:18:51.849 [2024-12-09 05:13:34.094652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 01:18:51.849 [2024-12-09 05:13:34.118680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:18:51.849 malloc0 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:18:51.849 { 01:18:51.849 "params": { 01:18:51.849 "name": "Nvme$subsystem", 01:18:51.849 "trtype": "$TEST_TRANSPORT", 01:18:51.849 "traddr": "$NVMF_FIRST_TARGET_IP", 01:18:51.849 "adrfam": "ipv4", 01:18:51.849 "trsvcid": "$NVMF_PORT", 01:18:51.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:18:51.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:18:51.849 "hdgst": ${hdgst:-false}, 01:18:51.849 "ddgst": ${ddgst:-false} 01:18:51.849 }, 01:18:51.849 "method": "bdev_nvme_attach_controller" 01:18:51.849 } 01:18:51.849 EOF 01:18:51.849 )") 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
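The zcopy.sh lines above (zcopy.sh@22 through zcopy.sh@30) configure the target entirely through rpc_cmd, which in the autotest harness is essentially a wrapper around scripts/rpc.py talking to the /var/tmp/spdk.sock socket mentioned earlier. A rough equivalent as direct rpc.py calls, using exactly the flag values traced above (the socket and repo paths are the ones from this job; the wrapper's extra error handling is omitted):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    # TCP transport with in-capsule data disabled (-c 0) and zero-copy enabled (--zcopy)
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem cnode1: any host allowed (-a), fixed serial, at most 10 namespaces (-m 10)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # Data listener plus discovery listener on the namespace-side address
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # 32 MB malloc bdev with 4096-byte blocks, exported as namespace 1 of cnode1
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1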
01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 01:18:51.849 05:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:18:51.849 "params": { 01:18:51.849 "name": "Nvme1", 01:18:51.849 "trtype": "tcp", 01:18:51.849 "traddr": "10.0.0.3", 01:18:51.849 "adrfam": "ipv4", 01:18:51.849 "trsvcid": "4420", 01:18:51.849 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:18:51.849 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:18:51.849 "hdgst": false, 01:18:51.849 "ddgst": false 01:18:51.849 }, 01:18:51.849 "method": "bdev_nvme_attach_controller" 01:18:51.849 }' 01:18:51.849 [2024-12-09 05:13:34.215552] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:18:51.849 [2024-12-09 05:13:34.215627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65451 ] 01:18:52.106 [2024-12-09 05:13:34.359376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:18:52.106 [2024-12-09 05:13:34.429190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:18:52.106 [2024-12-09 05:13:34.481562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:18:52.362 Running I/O for 10 seconds... 01:18:54.254 6406.00 IOPS, 50.05 MiB/s [2024-12-09T05:13:37.649Z] 7307.50 IOPS, 57.09 MiB/s [2024-12-09T05:13:39.029Z] 7647.67 IOPS, 59.75 MiB/s [2024-12-09T05:13:39.598Z] 7718.75 IOPS, 60.30 MiB/s [2024-12-09T05:13:40.980Z] 7698.60 IOPS, 60.15 MiB/s [2024-12-09T05:13:41.916Z] 7807.00 IOPS, 60.99 MiB/s [2024-12-09T05:13:42.853Z] 7887.43 IOPS, 61.62 MiB/s [2024-12-09T05:13:43.788Z] 7963.00 IOPS, 62.21 MiB/s [2024-12-09T05:13:44.735Z] 8000.67 IOPS, 62.51 MiB/s [2024-12-09T05:13:44.735Z] 8015.40 IOPS, 62.62 MiB/s 01:19:02.279 Latency(us) 01:19:02.279 [2024-12-09T05:13:44.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:19:02.279 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 01:19:02.279 Verification LBA range: start 0x0 length 0x1000 01:19:02.279 Nvme1n1 : 10.01 8016.73 62.63 0.00 0.00 15920.26 2117.76 27817.03 01:19:02.279 [2024-12-09T05:13:44.735Z] =================================================================================================================== 01:19:02.279 [2024-12-09T05:13:44.735Z] Total : 8016.73 62.63 0.00 0.00 15920.26 2117.76 27817.03 01:19:02.538 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65574 01:19:02.538 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 01:19:02.538 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:02.538 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 01:19:02.538 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 01:19:02.538 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 01:19:02.538 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 01:19:02.538 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:19:02.538 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:19:02.538 { 01:19:02.538 "params": { 01:19:02.538 "name": "Nvme$subsystem", 01:19:02.538 "trtype": "$TEST_TRANSPORT", 01:19:02.538 "traddr": "$NVMF_FIRST_TARGET_IP", 01:19:02.538 "adrfam": "ipv4", 01:19:02.538 "trsvcid": "$NVMF_PORT", 01:19:02.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:19:02.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:19:02.538 "hdgst": ${hdgst:-false}, 01:19:02.538 "ddgst": ${ddgst:-false} 01:19:02.538 }, 01:19:02.538 "method": "bdev_nvme_attach_controller" 01:19:02.538 } 01:19:02.538 EOF 01:19:02.538 )") 01:19:02.538 [2024-12-09 05:13:44.808771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.538 [2024-12-09 05:13:44.808806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.538 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 01:19:02.538 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 01:19:02.538 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 01:19:02.538 05:13:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:19:02.538 "params": { 01:19:02.538 "name": "Nvme1", 01:19:02.538 "trtype": "tcp", 01:19:02.538 "traddr": "10.0.0.3", 01:19:02.538 "adrfam": "ipv4", 01:19:02.538 "trsvcid": "4420", 01:19:02.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:19:02.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:19:02.538 "hdgst": false, 01:19:02.538 "ddgst": false 01:19:02.538 }, 01:19:02.538 "method": "bdev_nvme_attach_controller" 01:19:02.538 }' 01:19:02.538 [2024-12-09 05:13:44.820720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.538 [2024-12-09 05:13:44.820741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.538 [2024-12-09 05:13:44.832691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.538 [2024-12-09 05:13:44.832711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.538 [2024-12-09 05:13:44.844663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.538 [2024-12-09 05:13:44.844681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.538 [2024-12-09 05:13:44.854834] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
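The 10-second verify run summarized in the Latency table above is internally consistent. At a fixed queue depth of 128, Little's law gives an expected average latency of about

    128 / 8016.73 IOPS ≈ 0.01597 s ≈ 15970 us

which is within roughly 0.3% of the reported 15920.26 us average, and the MiB/s column follows directly from the 8 KiB I/O size:

    8016.73 IOPS x 8192 B ≈ 65.67 MB/s ≈ 62.63 MiB/s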
01:19:02.538 [2024-12-09 05:13:44.854888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65574 ] 01:19:02.538 [2024-12-09 05:13:44.856644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.538 [2024-12-09 05:13:44.856661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.538 [2024-12-09 05:13:44.872624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.538 [2024-12-09 05:13:44.872647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.538 [2024-12-09 05:13:44.884595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.538 [2024-12-09 05:13:44.884626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.538 [2024-12-09 05:13:44.896574] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.538 [2024-12-09 05:13:44.896594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.538 [2024-12-09 05:13:44.908552] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.538 [2024-12-09 05:13:44.908572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.538 [2024-12-09 05:13:44.924527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.538 [2024-12-09 05:13:44.924548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.538 [2024-12-09 05:13:44.936511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.538 [2024-12-09 05:13:44.936533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.538 [2024-12-09 05:13:44.948484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.538 [2024-12-09 05:13:44.948506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.538 [2024-12-09 05:13:44.960465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.538 [2024-12-09 05:13:44.960486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.538 [2024-12-09 05:13:44.972447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.538 [2024-12-09 05:13:44.972468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.538 [2024-12-09 05:13:44.984426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.538 [2024-12-09 05:13:44.984445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:44.996418] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:44.996442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.006020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:19:02.800 [2024-12-09 05:13:45.008399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.008416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.020377] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.020408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.032350] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.032369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.044341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.044363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.060307] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.060336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.061378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:02.800 [2024-12-09 05:13:45.072302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.072338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.084289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.084317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.096261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.096299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.108249] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.108276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.120218] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.120246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.121477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:19:02.800 [2024-12-09 05:13:45.132202] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.132237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.144180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.144206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.156153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.156175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.168145] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.168176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.180138] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
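Both bdevperf invocations in this stage, the 10 s verify run above and the 5 s randrw run whose startup notices are interleaved with the errors here, use SPDK's build/examples/bdevperf fed over a bash process substitution (that is what the /dev/fd/62 and /dev/fd/63 paths are). The flag glosses below follow bdevperf's usage; the annotated command is the second run exactly as traced above:

    # --json <file>  bdev config; here the gen_nvmf_target_json output, which attaches
    #                controller Nvme1 over NVMe/TCP to 10.0.0.3:4420
    # -t 5           run for 5 seconds (10 for the first run)
    # -q 128         keep 128 I/Os in flight
    # -w randrw      random mixed read/write workload (verify for the first run)
    # -M 50          50% reads in the mix
    # -o 8192        8 KiB I/O size
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192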
01:19:02.800 [2024-12-09 05:13:45.180171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.192109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.192139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.204104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.204137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.216090] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.216120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 [2024-12-09 05:13:45.228071] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.228106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:02.800 Running I/O for 5 seconds... 01:19:02.800 [2024-12-09 05:13:45.244054] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:02.800 [2024-12-09 05:13:45.244084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.259795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.259834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.275175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.275210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.292707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.292749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.309015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.309052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.324683] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.324720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.339296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.339345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.350351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.350388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.365424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.365463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.380721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.380757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
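The repeated pairs of 'Requested NSID 1 already in use' and 'Unable to add namespace' messages above and below are expected output rather than a failure: after zcopy.sh@39 records the second bdevperf's pid (perfpid=65574), zcopy.sh@41 disables xtrace, so the loop itself is not echoed, but the nvmf_rpc_ns_paused messages show each attempt pausing the subsystem, failing to add an NSID that malloc0 already occupies, and resuming, all while zero-copy I/O is in flight. The sketch below only illustrates a loop that would produce this pattern; its structure and names are assumptions, not the actual test/nvmf/target/zcopy.sh code:

    # Illustrative only: reproduces the observed error pattern, not the real script.
    while kill -0 "$perfpid" 2> /dev/null; do
        # NSID 1 is already attached, so each call pauses the subsystem, fails in
        # spdk_nvmf_subsystem_add_ns_ext, and resumes it under active I/O.
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done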
01:19:03.060 [2024-12-09 05:13:45.394867] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.394897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.408807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.408835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.423057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.423085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.437192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.437221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.451350] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.451376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.466001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.466033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.481768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.481799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.495721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.495760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.060 [2024-12-09 05:13:45.510616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.060 [2024-12-09 05:13:45.510651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.526597] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 [2024-12-09 05:13:45.526629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.540487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 [2024-12-09 05:13:45.540518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.554231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 [2024-12-09 05:13:45.554276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.568855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 [2024-12-09 05:13:45.568902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.583859] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 [2024-12-09 05:13:45.583898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.599422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 
[2024-12-09 05:13:45.599467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.613392] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 [2024-12-09 05:13:45.613426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.628341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 [2024-12-09 05:13:45.628366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.643897] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 [2024-12-09 05:13:45.643930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.659470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 [2024-12-09 05:13:45.659502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.674935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 [2024-12-09 05:13:45.674964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.689471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 [2024-12-09 05:13:45.689500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.703580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 [2024-12-09 05:13:45.703609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.717934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 [2024-12-09 05:13:45.717963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.732531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 [2024-12-09 05:13:45.732561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.743473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 [2024-12-09 05:13:45.743498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.320 [2024-12-09 05:13:45.758675] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.320 [2024-12-09 05:13:45.758705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.579 [2024-12-09 05:13:45.774987] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.579 [2024-12-09 05:13:45.775046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.579 [2024-12-09 05:13:45.786226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.579 [2024-12-09 05:13:45.786254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.579 [2024-12-09 05:13:45.793264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.579 [2024-12-09 05:13:45.793293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.579 [2024-12-09 05:13:45.808705] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.579 [2024-12-09 05:13:45.808738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.579 [2024-12-09 05:13:45.823144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.579 [2024-12-09 05:13:45.823173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.579 [2024-12-09 05:13:45.833527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.579 [2024-12-09 05:13:45.833574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.579 [2024-12-09 05:13:45.848600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.579 [2024-12-09 05:13:45.848629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.579 [2024-12-09 05:13:45.864095] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.579 [2024-12-09 05:13:45.864127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.579 [2024-12-09 05:13:45.878761] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.579 [2024-12-09 05:13:45.878790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.579 [2024-12-09 05:13:45.894039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.579 [2024-12-09 05:13:45.894069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.579 [2024-12-09 05:13:45.908902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.579 [2024-12-09 05:13:45.908930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.579 [2024-12-09 05:13:45.925328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.579 [2024-12-09 05:13:45.925370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.579 [2024-12-09 05:13:45.941176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.579 [2024-12-09 05:13:45.941208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.579 [2024-12-09 05:13:45.954965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.579 [2024-12-09 05:13:45.954994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.579 [2024-12-09 05:13:45.969756] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.580 [2024-12-09 05:13:45.969786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.580 [2024-12-09 05:13:45.980162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.580 [2024-12-09 05:13:45.980204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.580 [2024-12-09 05:13:45.995125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.580 [2024-12-09 05:13:45.995154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.580 [2024-12-09 05:13:46.006395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.580 [2024-12-09 05:13:46.006424] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.580 [2024-12-09 05:13:46.021792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.580 [2024-12-09 05:13:46.021822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.037822] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.037859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.054387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.054417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.069000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.069036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.084697] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.084731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.100317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.100357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.115148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.115178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.131093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.131124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.144874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.144906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.159676] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.159708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.172951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.172981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.187977] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.188007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.203782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.203815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.217882] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.217916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.232835] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.232868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 15731.00 IOPS, 122.90 MiB/s [2024-12-09T05:13:46.296Z] [2024-12-09 05:13:46.243793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.243830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.259443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.259473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.275529] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.275560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:03.840 [2024-12-09 05:13:46.289384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:03.840 [2024-12-09 05:13:46.289416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.303813] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.303846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.318343] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.318371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.333905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.333937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.347767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.347801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.362666] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.362694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.378424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.378455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.392598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.392629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.403518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.403549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.418331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.418369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.434064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
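The '15731.00 IOPS, 122.90 MiB/s' sample interleaved with the errors above is a periodic throughput report from the 5-second randrw run (the verify run had already finished at 05:13:44), and it matches the 8 KiB I/O size used throughout:

    15731 IOPS x 8192 B ≈ 128.87 MB/s ≈ 122.90 MiB/s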
01:19:04.100 [2024-12-09 05:13:46.434092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.449031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.449064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.459942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.459974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.475058] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.475088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.491190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.491239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.506169] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.506203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.522615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.522649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.533879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.533910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.100 [2024-12-09 05:13:46.548750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.100 [2024-12-09 05:13:46.548785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.359 [2024-12-09 05:13:46.560542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.359 [2024-12-09 05:13:46.560576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.359 [2024-12-09 05:13:46.575936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.359 [2024-12-09 05:13:46.575968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.359 [2024-12-09 05:13:46.591615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.359 [2024-12-09 05:13:46.591644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.359 [2024-12-09 05:13:46.605983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.359 [2024-12-09 05:13:46.606013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.359 [2024-12-09 05:13:46.619758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.359 [2024-12-09 05:13:46.619791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.359 [2024-12-09 05:13:46.635003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.359 [2024-12-09 05:13:46.635043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.359 [2024-12-09 05:13:46.650989] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.359 [2024-12-09 05:13:46.651030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.359 [2024-12-09 05:13:46.665625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.359 [2024-12-09 05:13:46.665655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.359 [2024-12-09 05:13:46.676851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.359 [2024-12-09 05:13:46.676885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.359 [2024-12-09 05:13:46.693213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.359 [2024-12-09 05:13:46.693252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.359 [2024-12-09 05:13:46.708944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.360 [2024-12-09 05:13:46.708985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.360 [2024-12-09 05:13:46.719973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.360 [2024-12-09 05:13:46.720011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.360 [2024-12-09 05:13:46.735143] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.360 [2024-12-09 05:13:46.735174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.360 [2024-12-09 05:13:46.745819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.360 [2024-12-09 05:13:46.745851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.360 [2024-12-09 05:13:46.761232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.360 [2024-12-09 05:13:46.761266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.360 [2024-12-09 05:13:46.776889] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.360 [2024-12-09 05:13:46.776920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.360 [2024-12-09 05:13:46.790828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.360 [2024-12-09 05:13:46.790862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.360 [2024-12-09 05:13:46.806042] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.360 [2024-12-09 05:13:46.806076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.619 [2024-12-09 05:13:46.822475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.619 [2024-12-09 05:13:46.822512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.620 [2024-12-09 05:13:46.836368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.620 [2024-12-09 05:13:46.836401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.620 [2024-12-09 05:13:46.851443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.620 [2024-12-09 05:13:46.851476] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.620 [2024-12-09 05:13:46.867058] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.620 [2024-12-09 05:13:46.867093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.620 [2024-12-09 05:13:46.881714] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.620 [2024-12-09 05:13:46.881749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.620 [2024-12-09 05:13:46.890147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.620 [2024-12-09 05:13:46.890181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.620 [2024-12-09 05:13:46.905019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.620 [2024-12-09 05:13:46.905050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.620 [2024-12-09 05:13:46.919804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.620 [2024-12-09 05:13:46.919835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.620 [2024-12-09 05:13:46.936503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.620 [2024-12-09 05:13:46.936536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.620 [2024-12-09 05:13:46.952569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.620 [2024-12-09 05:13:46.952604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.620 [2024-12-09 05:13:46.969534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.620 [2024-12-09 05:13:46.969569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.620 [2024-12-09 05:13:46.985888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.620 [2024-12-09 05:13:46.985923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.620 [2024-12-09 05:13:47.000621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.620 [2024-12-09 05:13:47.000653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.620 [2024-12-09 05:13:47.016283] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.620 [2024-12-09 05:13:47.016319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.620 [2024-12-09 05:13:47.032594] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.620 [2024-12-09 05:13:47.032633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.620 [2024-12-09 05:13:47.048471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.620 [2024-12-09 05:13:47.048509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.620 [2024-12-09 05:13:47.063002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.620 [2024-12-09 05:13:47.063046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.078309] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.078351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.094533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.094568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.112008] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.112047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.128003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.128039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.139155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.139188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.155403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.155430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.170306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.170351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.186403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.186438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.202140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.202175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.213526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.213558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.228577] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.228610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 15452.00 IOPS, 120.72 MiB/s [2024-12-09T05:13:47.335Z] [2024-12-09 05:13:47.244082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.244116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.257714] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.257748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.272362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.272394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.283691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
01:19:04.879 [2024-12-09 05:13:47.283727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.298243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.298274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.309968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.310001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:04.879 [2024-12-09 05:13:47.325421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:04.879 [2024-12-09 05:13:47.325449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.138 [2024-12-09 05:13:47.340928] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.138 [2024-12-09 05:13:47.340962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.138 [2024-12-09 05:13:47.355107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.138 [2024-12-09 05:13:47.355147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.138 [2024-12-09 05:13:47.370632] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.138 [2024-12-09 05:13:47.370666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.138 [2024-12-09 05:13:47.387746] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.138 [2024-12-09 05:13:47.387784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.138 [2024-12-09 05:13:47.403782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.138 [2024-12-09 05:13:47.403820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.138 [2024-12-09 05:13:47.419908] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.138 [2024-12-09 05:13:47.419944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.138 [2024-12-09 05:13:47.434176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.138 [2024-12-09 05:13:47.434209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.138 [2024-12-09 05:13:47.449426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.138 [2024-12-09 05:13:47.449458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.138 [2024-12-09 05:13:47.465690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.139 [2024-12-09 05:13:47.465724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.139 [2024-12-09 05:13:47.477005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.139 [2024-12-09 05:13:47.477042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.139 [2024-12-09 05:13:47.492124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.139 [2024-12-09 05:13:47.492158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.139 [2024-12-09 05:13:47.507756] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.139 [2024-12-09 05:13:47.507789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.139 [2024-12-09 05:13:47.522150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.139 [2024-12-09 05:13:47.522181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.139 [2024-12-09 05:13:47.533239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.139 [2024-12-09 05:13:47.533271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.139 [2024-12-09 05:13:47.548458] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.139 [2024-12-09 05:13:47.548490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.139 [2024-12-09 05:13:47.564342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.139 [2024-12-09 05:13:47.564375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.139 [2024-12-09 05:13:47.579336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.139 [2024-12-09 05:13:47.579370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.139 [2024-12-09 05:13:47.591261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.139 [2024-12-09 05:13:47.591314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.606820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.606852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.623764] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.623800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.640782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.640819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.656421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.656452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.670594] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.670625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.685307] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.685344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.702108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.702141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.718440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.718468] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.734740] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.734772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.748823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.748851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.763477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.763507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.777431] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.777462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.792361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.792391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.808154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.808187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.822201] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.822236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.836937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.836972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.398 [2024-12-09 05:13:47.847661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.398 [2024-12-09 05:13:47.847694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.658 [2024-12-09 05:13:47.863407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.658 [2024-12-09 05:13:47.863441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.658 [2024-12-09 05:13:47.879262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.658 [2024-12-09 05:13:47.879293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.658 [2024-12-09 05:13:47.891473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.658 [2024-12-09 05:13:47.891505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.658 [2024-12-09 05:13:47.907025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.658 [2024-12-09 05:13:47.907065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.658 [2024-12-09 05:13:47.922969] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.658 [2024-12-09 05:13:47.922997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.658 [2024-12-09 05:13:47.934187] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.658 [2024-12-09 05:13:47.934218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.658 [2024-12-09 05:13:47.949407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.658 [2024-12-09 05:13:47.949438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.658 [2024-12-09 05:13:47.966791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.658 [2024-12-09 05:13:47.966818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.658 [2024-12-09 05:13:47.983413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.658 [2024-12-09 05:13:47.983449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.658 [2024-12-09 05:13:47.999804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.658 [2024-12-09 05:13:47.999840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.658 [2024-12-09 05:13:48.014400] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.658 [2024-12-09 05:13:48.014432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.658 [2024-12-09 05:13:48.026083] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.658 [2024-12-09 05:13:48.026112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.658 [2024-12-09 05:13:48.041835] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.658 [2024-12-09 05:13:48.041863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.658 [2024-12-09 05:13:48.057603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.658 [2024-12-09 05:13:48.057636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.658 [2024-12-09 05:13:48.071868] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.658 [2024-12-09 05:13:48.071901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.658 [2024-12-09 05:13:48.087048] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.658 [2024-12-09 05:13:48.087079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.659 [2024-12-09 05:13:48.102296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.659 [2024-12-09 05:13:48.102335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.118031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.118060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.133845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.133874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.148162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.148194] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.159559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.159591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.174694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.174725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.191475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.191506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.207125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.207157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.221706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.221737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.232518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.232549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 15313.00 IOPS, 119.63 MiB/s [2024-12-09T05:13:48.374Z] [2024-12-09 05:13:48.247611] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.247643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.263744] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.263776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.277797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.277828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.288663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.288694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.303534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.303563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.314569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.314598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.329746] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.329777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.345733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.345766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 
05:13:48.356734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.356764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:05.918 [2024-12-09 05:13:48.371281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:05.918 [2024-12-09 05:13:48.371312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.178 [2024-12-09 05:13:48.382480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.178 [2024-12-09 05:13:48.382509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.178 [2024-12-09 05:13:48.397590] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.178 [2024-12-09 05:13:48.397622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.178 [2024-12-09 05:13:48.413060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.178 [2024-12-09 05:13:48.413090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.178 [2024-12-09 05:13:48.427667] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.178 [2024-12-09 05:13:48.427698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.178 [2024-12-09 05:13:48.438829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.178 [2024-12-09 05:13:48.438861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.178 [2024-12-09 05:13:48.454106] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.178 [2024-12-09 05:13:48.454134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.178 [2024-12-09 05:13:48.469732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.178 [2024-12-09 05:13:48.469767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.178 [2024-12-09 05:13:48.484932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.178 [2024-12-09 05:13:48.484963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.178 [2024-12-09 05:13:48.500631] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.178 [2024-12-09 05:13:48.500662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.178 [2024-12-09 05:13:48.515354] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.178 [2024-12-09 05:13:48.515383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.179 [2024-12-09 05:13:48.526451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.179 [2024-12-09 05:13:48.526482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.179 [2024-12-09 05:13:48.541425] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.179 [2024-12-09 05:13:48.541452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.179 [2024-12-09 05:13:48.556960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.179 [2024-12-09 05:13:48.556989] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.179 [2024-12-09 05:13:48.571687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.179 [2024-12-09 05:13:48.571719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.179 [2024-12-09 05:13:48.587654] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.179 [2024-12-09 05:13:48.587686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.179 [2024-12-09 05:13:48.601938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.179 [2024-12-09 05:13:48.601969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.179 [2024-12-09 05:13:48.613045] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.179 [2024-12-09 05:13:48.613078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.179 [2024-12-09 05:13:48.628049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.179 [2024-12-09 05:13:48.628080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.644225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.644257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.655248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.655279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.670206] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.670237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.686044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.686077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.700557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.700586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.714602] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.714628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.729824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.729851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.745805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.745837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.757001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.757033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.772064] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.772096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.782942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.782972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.797750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.797777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.808986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.809016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.824067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.824099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.839365] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.839398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.854060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.854090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.869254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.869285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.439 [2024-12-09 05:13:48.884229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.439 [2024-12-09 05:13:48.884260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.699 [2024-12-09 05:13:48.899056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.699 [2024-12-09 05:13:48.899085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.699 [2024-12-09 05:13:48.913950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.699 [2024-12-09 05:13:48.913980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.699 [2024-12-09 05:13:48.924793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.699 [2024-12-09 05:13:48.924821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.699 [2024-12-09 05:13:48.939896] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.699 [2024-12-09 05:13:48.939930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.699 [2024-12-09 05:13:48.954766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.699 [2024-12-09 05:13:48.954797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.699 [2024-12-09 05:13:48.969410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.699 [2024-12-09 05:13:48.969442] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.699 [2024-12-09 05:13:48.981030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.699 [2024-12-09 05:13:48.981063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.699 [2024-12-09 05:13:48.996536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.699 [2024-12-09 05:13:48.996567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.699 [2024-12-09 05:13:49.016807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.699 [2024-12-09 05:13:49.016840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.699 [2024-12-09 05:13:49.028230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.699 [2024-12-09 05:13:49.028273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.699 [2024-12-09 05:13:49.043694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.699 [2024-12-09 05:13:49.043727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.699 [2024-12-09 05:13:49.060691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.699 [2024-12-09 05:13:49.060723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.699 [2024-12-09 05:13:49.077150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.699 [2024-12-09 05:13:49.077182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.699 [2024-12-09 05:13:49.088097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.699 [2024-12-09 05:13:49.088129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.699 [2024-12-09 05:13:49.103270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.699 [2024-12-09 05:13:49.103301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.699 [2024-12-09 05:13:49.118535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.700 [2024-12-09 05:13:49.118565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.700 [2024-12-09 05:13:49.132914] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.700 [2024-12-09 05:13:49.132943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.700 [2024-12-09 05:13:49.147174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.700 [2024-12-09 05:13:49.147234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.162199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.162231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.176389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.176429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.191892] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.191923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.207188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.207220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.221378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.221407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 15400.75 IOPS, 120.32 MiB/s [2024-12-09T05:13:49.416Z] [2024-12-09 05:13:49.235698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.235732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.250254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.250286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.266944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.266978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.278008] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.278040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.293638] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.293668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.309111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.309143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.325087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.325117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.336218] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.336250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.343255] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.343284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.359412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.359447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.376350] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.376383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.392896] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
01:19:06.960 [2024-12-09 05:13:49.392931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:06.960 [2024-12-09 05:13:49.409804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:06.960 [2024-12-09 05:13:49.409838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.220 [2024-12-09 05:13:49.426057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.220 [2024-12-09 05:13:49.426085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.220 [2024-12-09 05:13:49.442192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.220 [2024-12-09 05:13:49.442225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.220 [2024-12-09 05:13:49.456314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.220 [2024-12-09 05:13:49.456356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.221 [2024-12-09 05:13:49.471845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.221 [2024-12-09 05:13:49.471878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.221 [2024-12-09 05:13:49.488302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.221 [2024-12-09 05:13:49.488347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.221 [2024-12-09 05:13:49.504815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.221 [2024-12-09 05:13:49.504846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.221 [2024-12-09 05:13:49.521312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.221 [2024-12-09 05:13:49.521352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.221 [2024-12-09 05:13:49.537600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.221 [2024-12-09 05:13:49.537631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.221 [2024-12-09 05:13:49.554466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.221 [2024-12-09 05:13:49.554497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.221 [2024-12-09 05:13:49.570994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.221 [2024-12-09 05:13:49.571034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.221 [2024-12-09 05:13:49.587484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.221 [2024-12-09 05:13:49.587519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.221 [2024-12-09 05:13:49.598871] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.221 [2024-12-09 05:13:49.598905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.221 [2024-12-09 05:13:49.614336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.221 [2024-12-09 05:13:49.614368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.221 [2024-12-09 05:13:49.631661] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.221 [2024-12-09 05:13:49.631694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.221 [2024-12-09 05:13:49.647729] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.221 [2024-12-09 05:13:49.647770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.221 [2024-12-09 05:13:49.664643] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.221 [2024-12-09 05:13:49.664677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.481 [2024-12-09 05:13:49.681555] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.481 [2024-12-09 05:13:49.681591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.481 [2024-12-09 05:13:49.698045] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.481 [2024-12-09 05:13:49.698077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.481 [2024-12-09 05:13:49.714712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.481 [2024-12-09 05:13:49.714743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.481 [2024-12-09 05:13:49.731966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.481 [2024-12-09 05:13:49.731999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.481 [2024-12-09 05:13:49.748885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.481 [2024-12-09 05:13:49.748918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.481 [2024-12-09 05:13:49.766890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.481 [2024-12-09 05:13:49.766925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.481 [2024-12-09 05:13:49.782450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.481 [2024-12-09 05:13:49.782483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.481 [2024-12-09 05:13:49.797394] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.481 [2024-12-09 05:13:49.797427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.481 [2024-12-09 05:13:49.809398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.481 [2024-12-09 05:13:49.809428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.481 [2024-12-09 05:13:49.824603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.481 [2024-12-09 05:13:49.824645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.481 [2024-12-09 05:13:49.841389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.481 [2024-12-09 05:13:49.841422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.481 [2024-12-09 05:13:49.858109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.481 [2024-12-09 05:13:49.858142] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.481 [2024-12-09 05:13:49.874985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.481 [2024-12-09 05:13:49.875025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.481 [2024-12-09 05:13:49.891735] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.481 [2024-12-09 05:13:49.891769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.481 [2024-12-09 05:13:49.908844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.481 [2024-12-09 05:13:49.908876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.481 [2024-12-09 05:13:49.925157] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.481 [2024-12-09 05:13:49.925190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.741 [2024-12-09 05:13:49.942279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.741 [2024-12-09 05:13:49.942308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.741 [2024-12-09 05:13:49.957833] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.741 [2024-12-09 05:13:49.957867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.741 [2024-12-09 05:13:49.974072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.741 [2024-12-09 05:13:49.974104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.741 [2024-12-09 05:13:49.989996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.741 [2024-12-09 05:13:49.990034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.741 [2024-12-09 05:13:50.004977] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.741 [2024-12-09 05:13:50.005011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.741 [2024-12-09 05:13:50.016360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.741 [2024-12-09 05:13:50.016394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.741 [2024-12-09 05:13:50.032209] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.741 [2024-12-09 05:13:50.032254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.741 [2024-12-09 05:13:50.049071] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.741 [2024-12-09 05:13:50.049109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.741 [2024-12-09 05:13:50.065783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.741 [2024-12-09 05:13:50.065826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.741 [2024-12-09 05:13:50.082420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.741 [2024-12-09 05:13:50.082478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.741 [2024-12-09 05:13:50.099876] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.741 [2024-12-09 05:13:50.099912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.741 [2024-12-09 05:13:50.116320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.741 [2024-12-09 05:13:50.116384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.741 [2024-12-09 05:13:50.133105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.741 [2024-12-09 05:13:50.133144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.741 [2024-12-09 05:13:50.149915] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.741 [2024-12-09 05:13:50.149949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.741 [2024-12-09 05:13:50.167264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.741 [2024-12-09 05:13:50.167299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:07.741 [2024-12-09 05:13:50.183177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:07.741 [2024-12-09 05:13:50.183213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.000 [2024-12-09 05:13:50.200041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.000 [2024-12-09 05:13:50.200074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.000 [2024-12-09 05:13:50.216175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.000 [2024-12-09 05:13:50.216211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.000 15166.20 IOPS, 118.49 MiB/s [2024-12-09T05:13:50.456Z] [2024-12-09 05:13:50.234566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.000 [2024-12-09 05:13:50.234596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.000 01:19:08.000 Latency(us) 01:19:08.000 [2024-12-09T05:13:50.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:19:08.000 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 01:19:08.000 Nvme1n1 : 5.01 15168.67 118.51 0.00 0.00 8430.13 2818.91 17857.84 01:19:08.000 [2024-12-09T05:13:50.456Z] =================================================================================================================== 01:19:08.000 [2024-12-09T05:13:50.456Z] Total : 15168.67 118.51 0.00 0.00 8430.13 2818.91 17857.84 01:19:08.000 [2024-12-09 05:13:50.244297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.000 [2024-12-09 05:13:50.244334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.000 [2024-12-09 05:13:50.256266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.000 [2024-12-09 05:13:50.256298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.000 [2024-12-09 05:13:50.268255] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.000 [2024-12-09 05:13:50.268285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.000 [2024-12-09 
05:13:50.280222] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.000 [2024-12-09 05:13:50.280259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.000 [2024-12-09 05:13:50.292249] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.000 [2024-12-09 05:13:50.292283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.000 [2024-12-09 05:13:50.304186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.001 [2024-12-09 05:13:50.304220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.001 [2024-12-09 05:13:50.316164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.001 [2024-12-09 05:13:50.316193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.001 [2024-12-09 05:13:50.328153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.001 [2024-12-09 05:13:50.328184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.001 [2024-12-09 05:13:50.340132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.001 [2024-12-09 05:13:50.340161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.001 [2024-12-09 05:13:50.348112] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.001 [2024-12-09 05:13:50.348137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.001 [2024-12-09 05:13:50.356097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.001 [2024-12-09 05:13:50.356120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.001 [2024-12-09 05:13:50.368081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.001 [2024-12-09 05:13:50.368104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.001 [2024-12-09 05:13:50.380059] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.001 [2024-12-09 05:13:50.380085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.001 [2024-12-09 05:13:50.392034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.001 [2024-12-09 05:13:50.392056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.001 [2024-12-09 05:13:50.404020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.001 [2024-12-09 05:13:50.404044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.001 [2024-12-09 05:13:50.415993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.001 [2024-12-09 05:13:50.416014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.001 [2024-12-09 05:13:50.427970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.001 [2024-12-09 05:13:50.427990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.001 [2024-12-09 05:13:50.439957] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.001 [2024-12-09 05:13:50.439975] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.001 [2024-12-09 05:13:50.451930] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:19:08.001 [2024-12-09 05:13:50.451950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:08.262 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65574) - No such process 01:19:08.262 05:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65574 01:19:08.262 05:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:19:08.262 05:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:08.262 05:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:08.262 05:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:08.262 05:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:19:08.262 05:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:08.262 05:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:08.262 delay0 01:19:08.262 05:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:08.262 05:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 01:19:08.262 05:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:08.262 05:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:08.262 05:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:08.262 05:13:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 01:19:08.262 [2024-12-09 05:13:50.683747] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 01:19:14.828 Initializing NVMe Controllers 01:19:14.828 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:19:14.828 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:19:14.828 Initialization complete. Launching workers. 
01:19:14.828 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 936 01:19:14.828 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1223, failed to submit 33 01:19:14.828 success 1113, unsuccessful 110, failed 0 01:19:14.828 05:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 01:19:14.828 05:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 01:19:14.828 05:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 01:19:14.828 05:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 01:19:14.828 05:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:19:14.828 05:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 01:19:14.828 05:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 01:19:14.828 05:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:19:14.828 rmmod nvme_tcp 01:19:14.828 rmmod nvme_fabrics 01:19:14.828 rmmod nvme_keyring 01:19:14.828 05:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:19:14.828 05:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 01:19:14.828 05:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 01:19:14.828 05:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65426 ']' 01:19:14.828 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65426 01:19:14.828 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65426 ']' 01:19:14.828 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65426 01:19:14.828 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 01:19:14.828 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:14.828 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65426 01:19:14.829 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:19:14.829 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:19:14.829 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65426' 01:19:14.829 killing process with pid 65426 01:19:14.829 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65426 01:19:14.829 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65426 01:19:14.829 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:19:14.829 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:19:14.829 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:19:14.829 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 01:19:14.829 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 01:19:14.829 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 01:19:14.829 05:13:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:19:14.829 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:19:14.829 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:19:14.829 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:19:15.137 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:19:15.137 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:19:15.137 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:19:15.137 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:19:15.137 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:19:15.137 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:19:15.137 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:19:15.137 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:19:15.137 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:19:15.137 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:19:15.137 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:15.137 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:15.137 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 01:19:15.137 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:15.137 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:19:15.137 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:15.397 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 01:19:15.397 01:19:15.397 real 0m24.539s 01:19:15.397 user 0m40.675s 01:19:15.397 sys 0m6.525s 01:19:15.397 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:15.397 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:19:15.397 ************************************ 01:19:15.397 END TEST nvmf_zcopy 01:19:15.397 ************************************ 01:19:15.397 05:13:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 01:19:15.397 05:13:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:19:15.397 05:13:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:15.397 05:13:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:19:15.397 ************************************ 01:19:15.397 START TEST nvmf_nmic 01:19:15.397 ************************************ 01:19:15.397 05:13:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 01:19:15.397 * Looking for test storage... 01:19:15.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:19:15.397 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:19:15.397 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 01:19:15.397 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:19:15.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:15.658 --rc genhtml_branch_coverage=1 01:19:15.658 --rc genhtml_function_coverage=1 01:19:15.658 --rc genhtml_legend=1 01:19:15.658 --rc geninfo_all_blocks=1 01:19:15.658 --rc geninfo_unexecuted_blocks=1 01:19:15.658 01:19:15.658 ' 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:19:15.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:15.658 --rc genhtml_branch_coverage=1 01:19:15.658 --rc genhtml_function_coverage=1 01:19:15.658 --rc genhtml_legend=1 01:19:15.658 --rc geninfo_all_blocks=1 01:19:15.658 --rc geninfo_unexecuted_blocks=1 01:19:15.658 01:19:15.658 ' 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:19:15.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:15.658 --rc genhtml_branch_coverage=1 01:19:15.658 --rc genhtml_function_coverage=1 01:19:15.658 --rc genhtml_legend=1 01:19:15.658 --rc geninfo_all_blocks=1 01:19:15.658 --rc geninfo_unexecuted_blocks=1 01:19:15.658 01:19:15.658 ' 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:19:15.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:15.658 --rc genhtml_branch_coverage=1 01:19:15.658 --rc genhtml_function_coverage=1 01:19:15.658 --rc genhtml_legend=1 01:19:15.658 --rc geninfo_all_blocks=1 01:19:15.658 --rc geninfo_unexecuted_blocks=1 01:19:15.658 01:19:15.658 ' 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:19:15.658 05:13:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:19:15.658 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:19:15.658 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 01:19:15.659 05:13:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:19:15.659 Cannot 
find device "nvmf_init_br" 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:19:15.659 Cannot find device "nvmf_init_br2" 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:19:15.659 Cannot find device "nvmf_tgt_br" 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:19:15.659 Cannot find device "nvmf_tgt_br2" 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 01:19:15.659 05:13:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:19:15.659 Cannot find device "nvmf_init_br" 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:19:15.659 Cannot find device "nvmf_init_br2" 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:19:15.659 Cannot find device "nvmf_tgt_br" 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:19:15.659 Cannot find device "nvmf_tgt_br2" 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:19:15.659 Cannot find device "nvmf_br" 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:19:15.659 Cannot find device "nvmf_init_if" 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:19:15.659 Cannot find device "nvmf_init_if2" 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:15.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:15.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 01:19:15.659 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:19:15.920 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:19:15.920 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 01:19:15.920 01:19:15.920 --- 10.0.0.3 ping statistics --- 01:19:15.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:15.920 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:19:15.920 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:19:15.920 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 01:19:15.920 01:19:15.920 --- 10.0.0.4 ping statistics --- 01:19:15.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:15.920 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:19:15.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:19:15.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 01:19:15.920 01:19:15.920 --- 10.0.0.1 ping statistics --- 01:19:15.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:15.920 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:19:15.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:19:15.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 01:19:15.920 01:19:15.920 --- 10.0.0.2 ping statistics --- 01:19:15.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:15.920 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65953 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65953 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65953 ']' 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:15.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:15.920 05:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:15.920 [2024-12-09 05:13:58.355087] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:19:15.920 [2024-12-09 05:13:58.355154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:19:16.179 [2024-12-09 05:13:58.508054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:19:16.179 [2024-12-09 05:13:58.564526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:19:16.179 [2024-12-09 05:13:58.564573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:19:16.179 [2024-12-09 05:13:58.564580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:19:16.179 [2024-12-09 05:13:58.564586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:19:16.179 [2024-12-09 05:13:58.564591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:19:16.179 [2024-12-09 05:13:58.565461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:19:16.179 [2024-12-09 05:13:58.565700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:19:16.179 [2024-12-09 05:13:58.565852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:19:16.179 [2024-12-09 05:13:58.565852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:16.179 [2024-12-09 05:13:58.608447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:17.133 [2024-12-09 05:13:59.324017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:17.133 Malloc0 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:19:17.133 05:13:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:17.133 [2024-12-09 05:13:59.401006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 01:19:17.133 test case1: single bdev can't be used in multiple subsystems 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:17.133 [2024-12-09 05:13:59.436837] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 01:19:17.133 [2024-12-09 05:13:59.436904] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 01:19:17.133 [2024-12-09 05:13:59.436937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:19:17.133 request: 01:19:17.133 { 01:19:17.133 
"nqn": "nqn.2016-06.io.spdk:cnode2", 01:19:17.133 "namespace": { 01:19:17.133 "bdev_name": "Malloc0", 01:19:17.133 "no_auto_visible": false, 01:19:17.133 "hide_metadata": false 01:19:17.133 }, 01:19:17.133 "method": "nvmf_subsystem_add_ns", 01:19:17.133 "req_id": 1 01:19:17.133 } 01:19:17.133 Got JSON-RPC error response 01:19:17.133 response: 01:19:17.133 { 01:19:17.133 "code": -32602, 01:19:17.133 "message": "Invalid parameters" 01:19:17.133 } 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 01:19:17.133 Adding namespace failed - expected result. 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 01:19:17.133 test case2: host connect to nvmf target in multiple paths 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:17.133 [2024-12-09 05:13:59.452928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:17.133 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid=0567fff2-ddaf-4a4f-877d-a2600d7e662b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:19:17.392 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid=0567fff2-ddaf-4a4f-877d-a2600d7e662b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 01:19:17.392 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 01:19:17.392 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 01:19:17.392 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:19:17.392 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:19:17.392 05:13:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 01:19:19.295 05:14:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:19:19.295 05:14:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:19:19.296 05:14:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:19:19.554 05:14:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:19:19.554 05:14:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
01:19:19.554 05:14:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 01:19:19.554 05:14:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:19:19.554 [global] 01:19:19.554 thread=1 01:19:19.554 invalidate=1 01:19:19.554 rw=write 01:19:19.554 time_based=1 01:19:19.554 runtime=1 01:19:19.554 ioengine=libaio 01:19:19.554 direct=1 01:19:19.554 bs=4096 01:19:19.554 iodepth=1 01:19:19.554 norandommap=0 01:19:19.554 numjobs=1 01:19:19.554 01:19:19.554 verify_dump=1 01:19:19.554 verify_backlog=512 01:19:19.554 verify_state_save=0 01:19:19.554 do_verify=1 01:19:19.554 verify=crc32c-intel 01:19:19.554 [job0] 01:19:19.554 filename=/dev/nvme0n1 01:19:19.554 Could not set queue depth (nvme0n1) 01:19:19.554 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:19:19.554 fio-3.35 01:19:19.554 Starting 1 thread 01:19:20.932 01:19:20.932 job0: (groupid=0, jobs=1): err= 0: pid=66039: Mon Dec 9 05:14:03 2024 01:19:20.932 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 01:19:20.932 slat (nsec): min=6239, max=23804, avg=7384.70, stdev=1139.91 01:19:20.932 clat (usec): min=103, max=352, avg=151.29, stdev=17.40 01:19:20.932 lat (usec): min=109, max=368, avg=158.67, stdev=17.67 01:19:20.932 clat percentiles (usec): 01:19:20.932 | 1.00th=[ 112], 5.00th=[ 121], 10.00th=[ 128], 20.00th=[ 137], 01:19:20.932 | 30.00th=[ 143], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 01:19:20.932 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 178], 01:19:20.932 | 99.00th=[ 192], 99.50th=[ 200], 99.90th=[ 212], 99.95th=[ 225], 01:19:20.932 | 99.99th=[ 355] 01:19:20.932 write: IOPS=4035, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1001msec); 0 zone resets 01:19:20.932 slat (usec): min=9, max=121, avg=12.16, stdev= 5.85 01:19:20.932 clat (usec): min=64, max=470, avg=92.83, stdev=15.52 01:19:20.932 lat (usec): min=74, max=482, avg=104.99, stdev=17.82 01:19:20.932 clat percentiles (usec): 01:19:20.932 | 1.00th=[ 69], 5.00th=[ 73], 10.00th=[ 77], 20.00th=[ 81], 01:19:20.932 | 30.00th=[ 86], 40.00th=[ 89], 50.00th=[ 93], 60.00th=[ 96], 01:19:20.932 | 70.00th=[ 100], 80.00th=[ 103], 90.00th=[ 109], 95.00th=[ 114], 01:19:20.932 | 99.00th=[ 130], 99.50th=[ 139], 99.90th=[ 163], 99.95th=[ 318], 01:19:20.932 | 99.99th=[ 469] 01:19:20.932 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 01:19:20.932 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 01:19:20.932 lat (usec) : 100=37.93%, 250=62.01%, 500=0.05% 01:19:20.932 cpu : usr=2.00%, sys=5.60%, ctx=7624, majf=0, minf=5 01:19:20.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:19:20.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:20.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:20.932 issued rwts: total=3584,4040,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:20.932 latency : target=0, window=0, percentile=100.00%, depth=1 01:19:20.932 01:19:20.932 Run status group 0 (all jobs): 01:19:20.932 READ: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 01:19:20.932 WRITE: bw=15.8MiB/s (16.5MB/s), 15.8MiB/s-15.8MiB/s (16.5MB/s-16.5MB/s), io=15.8MiB (16.5MB), run=1001-1001msec 01:19:20.932 01:19:20.932 Disk stats (read/write): 01:19:20.932 nvme0n1: ios=3280/3584, merge=0/0, ticks=524/353, 
in_queue=877, util=91.37% 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:19:20.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:19:20.932 rmmod nvme_tcp 01:19:20.932 rmmod nvme_fabrics 01:19:20.932 rmmod nvme_keyring 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65953 ']' 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65953 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65953 ']' 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65953 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65953 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65953' 01:19:20.932 killing process with pid 65953 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 65953 01:19:20.932 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65953 01:19:21.192 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:19:21.192 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:19:21.192 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:19:21.192 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 01:19:21.192 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 01:19:21.192 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:19:21.192 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 01:19:21.192 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:19:21.192 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:19:21.192 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:19:21.192 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:19:21.192 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:19:21.192 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:19:21.451 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:19:21.451 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:19:21.451 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:19:21.451 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:19:21.451 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:19:21.451 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:19:21.451 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:19:21.451 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:21.451 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:21.451 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 01:19:21.451 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:21.451 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:19:21.451 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:21.451 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 01:19:21.451 01:19:21.451 real 0m6.191s 01:19:21.451 user 0m19.332s 01:19:21.451 sys 0m2.041s 01:19:21.451 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:21.451 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:19:21.451 ************************************ 
01:19:21.451 END TEST nvmf_nmic 01:19:21.451 ************************************ 01:19:21.711 05:14:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 01:19:21.711 05:14:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:19:21.711 05:14:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:21.711 05:14:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:19:21.711 ************************************ 01:19:21.711 START TEST nvmf_fio_target 01:19:21.711 ************************************ 01:19:21.711 05:14:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 01:19:21.711 * Looking for test storage... 01:19:21.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:19:21.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:21.711 --rc genhtml_branch_coverage=1 01:19:21.711 --rc genhtml_function_coverage=1 01:19:21.711 --rc genhtml_legend=1 01:19:21.711 --rc geninfo_all_blocks=1 01:19:21.711 --rc geninfo_unexecuted_blocks=1 01:19:21.711 01:19:21.711 ' 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:19:21.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:21.711 --rc genhtml_branch_coverage=1 01:19:21.711 --rc genhtml_function_coverage=1 01:19:21.711 --rc genhtml_legend=1 01:19:21.711 --rc geninfo_all_blocks=1 01:19:21.711 --rc geninfo_unexecuted_blocks=1 01:19:21.711 01:19:21.711 ' 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:19:21.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:21.711 --rc genhtml_branch_coverage=1 01:19:21.711 --rc genhtml_function_coverage=1 01:19:21.711 --rc genhtml_legend=1 01:19:21.711 --rc geninfo_all_blocks=1 01:19:21.711 --rc geninfo_unexecuted_blocks=1 01:19:21.711 01:19:21.711 ' 01:19:21.711 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:19:21.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:21.711 --rc genhtml_branch_coverage=1 01:19:21.711 --rc genhtml_function_coverage=1 01:19:21.711 --rc genhtml_legend=1 01:19:21.711 --rc geninfo_all_blocks=1 01:19:21.711 --rc geninfo_unexecuted_blocks=1 01:19:21.711 01:19:21.711 ' 01:19:21.712 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:19:21.712 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 01:19:21.712 
05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:19:21.712 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:19:21.712 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:19:21.712 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:19:21.712 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:19:21.712 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:19:21.712 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:19:21.712 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:19:21.712 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:19:21.712 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:19:21.972 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:19:21.972 05:14:04 
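build_nvmf_app_args above is only assembling a bash array of target arguments (the "integer expression expected" complaint is harmless; it comes from testing an empty variable with -eq on common.sh line 33). The array is launched much later with the namespace wrapper prepended. A minimal sketch of that pattern, with the binary path and SHM id inferred from the launch line further down in this log, so treat them as values read from this run rather than from the script itself:

NVMF_APP_SHM_ID=0
NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)              # shared-memory id + tracepoint mask
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # run the target inside the test namespace
"${NVMF_APP[@]}" -m 0xF &                                # expands to: ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF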
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:19:21.972 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:19:21.973 Cannot find device "nvmf_init_br" 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:19:21.973 Cannot find device "nvmf_init_br2" 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:19:21.973 Cannot find device "nvmf_tgt_br" 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:19:21.973 Cannot find device "nvmf_tgt_br2" 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:19:21.973 Cannot find device "nvmf_init_br" 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:19:21.973 Cannot find device "nvmf_init_br2" 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:19:21.973 Cannot find device "nvmf_tgt_br" 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:19:21.973 Cannot find device "nvmf_tgt_br2" 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:19:21.973 Cannot find device "nvmf_br" 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:19:21.973 Cannot find device "nvmf_init_if" 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:19:21.973 Cannot find device "nvmf_init_if2" 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:21.973 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 01:19:21.973 
05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:21.973 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:19:21.973 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:19:22.233 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:19:22.234 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:19:22.234 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 01:19:22.234 01:19:22.234 --- 10.0.0.3 ping statistics --- 01:19:22.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:22.234 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:19:22.234 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:19:22.234 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 01:19:22.234 01:19:22.234 --- 10.0.0.4 ping statistics --- 01:19:22.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:22.234 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:19:22.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:19:22.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 01:19:22.234 01:19:22.234 --- 10.0.0.1 ping statistics --- 01:19:22.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:22.234 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:19:22.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:19:22.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 01:19:22.234 01:19:22.234 --- 10.0.0.2 ping statistics --- 01:19:22.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:22.234 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66278 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66278 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66278 ']' 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:22.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:22.234 05:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:19:22.234 [2024-12-09 05:14:04.669563] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
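Everything nvmf_veth_init did above (nvmf/common.sh@177-225) boils down to a small, purely virtual topology: two veth pairs hanging off a bridge, with the target ends moved into their own network namespace and TCP port 4420 opened. Condensed from the commands in the trace, keeping only the first initiator/target pair, the recipe is roughly:

# names and addresses are the ones used by this run
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target leg
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3        # initiator -> target reachability check

The second pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4) is wired up the same way.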
01:19:22.234 [2024-12-09 05:14:04.669631] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:19:22.493 [2024-12-09 05:14:04.822175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:19:22.493 [2024-12-09 05:14:04.877247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:19:22.493 [2024-12-09 05:14:04.877298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:19:22.493 [2024-12-09 05:14:04.877305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:19:22.493 [2024-12-09 05:14:04.877309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:19:22.493 [2024-12-09 05:14:04.877313] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:19:22.493 [2024-12-09 05:14:04.878198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:19:22.493 [2024-12-09 05:14:04.878285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:19:22.493 [2024-12-09 05:14:04.878463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:22.493 [2024-12-09 05:14:04.878466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:19:22.493 [2024-12-09 05:14:04.919410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:19:23.505 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:23.505 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 01:19:23.505 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:19:23.505 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 01:19:23.505 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:19:23.505 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:19:23.505 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:19:23.505 [2024-12-09 05:14:05.841145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:19:23.505 05:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:19:23.801 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 01:19:23.801 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:19:24.060 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 01:19:24.060 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:19:24.320 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 01:19:24.320 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:19:24.580 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 01:19:24.580 05:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 01:19:24.839 05:14:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:19:24.839 05:14:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 01:19:24.839 05:14:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:19:25.098 05:14:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 01:19:25.098 05:14:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:19:25.357 05:14:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 01:19:25.357 05:14:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 01:19:25.617 05:14:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:19:25.876 05:14:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:19:25.876 05:14:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:19:26.136 05:14:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:19:26.136 05:14:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:19:26.395 05:14:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:19:26.653 [2024-12-09 05:14:08.897272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:19:26.653 05:14:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 01:19:26.912 05:14:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 01:19:27.170 05:14:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid=0567fff2-ddaf-4a4f-877d-a2600d7e662b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:19:27.170 05:14:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 01:19:27.170 05:14:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 01:19:27.170 05:14:09 
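Stripped of the xtrace noise, fio.sh@19-46 above builds the whole target and attaches the host with a short rpc.py sequence (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the namespace adds are regrouped after the listener for brevity, the flags themselves are the ones shown in the trace):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                       # run seven times -> Malloc0..Malloc6
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # likewise Malloc1, raid0, concat0
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b \
  --hostid=0567fff2-ddaf-4a4f-877d-a2600d7e662b

waitforserial then polls lsblk until all four namespaces show up with the SPDKISFASTANDAWESOME serial, which is what the grep -c check below is doing.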
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:19:27.170 05:14:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 01:19:27.170 05:14:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 01:19:27.170 05:14:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 01:19:29.075 05:14:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:19:29.075 05:14:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:19:29.075 05:14:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:19:29.075 05:14:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 01:19:29.075 05:14:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:19:29.075 05:14:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 01:19:29.075 05:14:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:19:29.333 [global] 01:19:29.333 thread=1 01:19:29.333 invalidate=1 01:19:29.333 rw=write 01:19:29.333 time_based=1 01:19:29.333 runtime=1 01:19:29.333 ioengine=libaio 01:19:29.333 direct=1 01:19:29.333 bs=4096 01:19:29.333 iodepth=1 01:19:29.333 norandommap=0 01:19:29.333 numjobs=1 01:19:29.333 01:19:29.333 verify_dump=1 01:19:29.333 verify_backlog=512 01:19:29.333 verify_state_save=0 01:19:29.333 do_verify=1 01:19:29.333 verify=crc32c-intel 01:19:29.333 [job0] 01:19:29.333 filename=/dev/nvme0n1 01:19:29.333 [job1] 01:19:29.333 filename=/dev/nvme0n2 01:19:29.333 [job2] 01:19:29.333 filename=/dev/nvme0n3 01:19:29.333 [job3] 01:19:29.333 filename=/dev/nvme0n4 01:19:29.333 Could not set queue depth (nvme0n1) 01:19:29.333 Could not set queue depth (nvme0n2) 01:19:29.333 Could not set queue depth (nvme0n3) 01:19:29.333 Could not set queue depth (nvme0n4) 01:19:29.333 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:19:29.333 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:19:29.333 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:19:29.333 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:19:29.333 fio-3.35 01:19:29.333 Starting 4 threads 01:19:30.710 01:19:30.710 job0: (groupid=0, jobs=1): err= 0: pid=66457: Mon Dec 9 05:14:12 2024 01:19:30.710 read: IOPS=3030, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1001msec) 01:19:30.710 slat (usec): min=5, max=119, avg= 9.22, stdev= 5.37 01:19:30.710 clat (usec): min=108, max=393, avg=173.72, stdev=57.71 01:19:30.710 lat (usec): min=115, max=405, avg=182.94, stdev=59.55 01:19:30.710 clat percentiles (usec): 01:19:30.710 | 1.00th=[ 119], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 133], 01:19:30.710 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 147], 60.00th=[ 157], 01:19:30.710 | 70.00th=[ 182], 80.00th=[ 225], 90.00th=[ 269], 95.00th=[ 302], 01:19:30.710 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 371], 99.95th=[ 388], 01:19:30.710 | 99.99th=[ 392] 01:19:30.710 
write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 01:19:30.710 slat (usec): min=7, max=243, avg=14.45, stdev= 9.47 01:19:30.710 clat (usec): min=73, max=520, avg=127.99, stdev=44.81 01:19:30.710 lat (usec): min=83, max=534, avg=142.45, stdev=49.77 01:19:30.710 clat percentiles (usec): 01:19:30.710 | 1.00th=[ 87], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 99], 01:19:30.710 | 30.00th=[ 102], 40.00th=[ 106], 50.00th=[ 111], 60.00th=[ 117], 01:19:30.710 | 70.00th=[ 125], 80.00th=[ 145], 90.00th=[ 210], 95.00th=[ 235], 01:19:30.710 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 314], 01:19:30.710 | 99.99th=[ 523] 01:19:30.710 bw ( KiB/s): min=16384, max=16384, per=33.73%, avg=16384.00, stdev= 0.00, samples=1 01:19:30.710 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 01:19:30.710 lat (usec) : 100=11.89%, 250=79.82%, 500=8.27%, 750=0.02% 01:19:30.710 cpu : usr=1.10%, sys=6.30%, ctx=6107, majf=0, minf=13 01:19:30.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:19:30.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:30.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:30.710 issued rwts: total=3034,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:30.710 latency : target=0, window=0, percentile=100.00%, depth=1 01:19:30.710 job1: (groupid=0, jobs=1): err= 0: pid=66461: Mon Dec 9 05:14:12 2024 01:19:30.710 read: IOPS=2949, BW=11.5MiB/s (12.1MB/s)(11.6MiB/1005msec) 01:19:30.710 slat (nsec): min=5988, max=51568, avg=8644.66, stdev=3962.14 01:19:30.710 clat (usec): min=113, max=4530, avg=181.39, stdev=154.45 01:19:30.710 lat (usec): min=119, max=4545, avg=190.03, stdev=155.61 01:19:30.710 clat percentiles (usec): 01:19:30.710 | 1.00th=[ 120], 5.00th=[ 126], 10.00th=[ 129], 20.00th=[ 135], 01:19:30.710 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 149], 60.00th=[ 155], 01:19:30.710 | 70.00th=[ 176], 80.00th=[ 229], 90.00th=[ 273], 95.00th=[ 302], 01:19:30.710 | 99.00th=[ 351], 99.50th=[ 367], 99.90th=[ 3720], 99.95th=[ 3851], 01:19:30.710 | 99.99th=[ 4555] 01:19:30.710 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 01:19:30.710 slat (usec): min=9, max=231, avg=14.39, stdev= 9.35 01:19:30.710 clat (usec): min=80, max=1275, avg=126.78, stdev=49.56 01:19:30.710 lat (usec): min=90, max=1285, avg=141.17, stdev=53.97 01:19:30.710 clat percentiles (usec): 01:19:30.710 | 1.00th=[ 86], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 98], 01:19:30.710 | 30.00th=[ 101], 40.00th=[ 105], 50.00th=[ 110], 60.00th=[ 114], 01:19:30.710 | 70.00th=[ 122], 80.00th=[ 143], 90.00th=[ 210], 95.00th=[ 237], 01:19:30.710 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 293], 99.95th=[ 529], 01:19:30.710 | 99.99th=[ 1270] 01:19:30.710 bw ( KiB/s): min= 8175, max=16384, per=25.28%, avg=12279.50, stdev=5804.64, samples=2 01:19:30.710 iops : min= 2043, max= 4096, avg=3069.50, stdev=1451.69, samples=2 01:19:30.710 lat (usec) : 100=13.40%, 250=77.93%, 500=8.47%, 750=0.05% 01:19:30.710 lat (msec) : 2=0.05%, 4=0.08%, 10=0.02% 01:19:30.710 cpu : usr=1.99%, sys=5.08%, ctx=6043, majf=0, minf=13 01:19:30.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:19:30.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:30.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:30.710 issued rwts: total=2964,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:30.710 latency : target=0, window=0, 
percentile=100.00%, depth=1 01:19:30.710 job2: (groupid=0, jobs=1): err= 0: pid=66463: Mon Dec 9 05:14:12 2024 01:19:30.710 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 01:19:30.710 slat (usec): min=6, max=236, avg=10.55, stdev= 7.71 01:19:30.710 clat (usec): min=122, max=1695, avg=187.68, stdev=61.67 01:19:30.710 lat (usec): min=129, max=1702, avg=198.22, stdev=62.26 01:19:30.710 clat percentiles (usec): 01:19:30.710 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 01:19:30.710 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 174], 01:19:30.711 | 70.00th=[ 188], 80.00th=[ 235], 90.00th=[ 273], 95.00th=[ 306], 01:19:30.711 | 99.00th=[ 355], 99.50th=[ 367], 99.90th=[ 412], 99.95th=[ 586], 01:19:30.711 | 99.99th=[ 1696] 01:19:30.711 write: IOPS=2985, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1001msec); 0 zone resets 01:19:30.711 slat (usec): min=9, max=123, avg=18.05, stdev=11.75 01:19:30.711 clat (usec): min=88, max=2258, avg=144.28, stdev=58.37 01:19:30.711 lat (usec): min=99, max=2280, avg=162.33, stdev=62.65 01:19:30.711 clat percentiles (usec): 01:19:30.711 | 1.00th=[ 96], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 111], 01:19:30.711 | 30.00th=[ 116], 40.00th=[ 120], 50.00th=[ 126], 60.00th=[ 133], 01:19:30.711 | 70.00th=[ 149], 80.00th=[ 186], 90.00th=[ 215], 95.00th=[ 237], 01:19:30.711 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 310], 99.95th=[ 326], 01:19:30.711 | 99.99th=[ 2245] 01:19:30.711 bw ( KiB/s): min=13656, max=13656, per=28.11%, avg=13656.00, stdev= 0.00, samples=1 01:19:30.711 iops : min= 3414, max= 3414, avg=3414.00, stdev= 0.00, samples=1 01:19:30.711 lat (usec) : 100=1.69%, 250=89.53%, 500=8.72%, 750=0.02% 01:19:30.711 lat (msec) : 2=0.02%, 4=0.02% 01:19:30.711 cpu : usr=1.00%, sys=7.10%, ctx=5548, majf=0, minf=8 01:19:30.711 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:19:30.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:30.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:30.711 issued rwts: total=2560,2988,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:30.711 latency : target=0, window=0, percentile=100.00%, depth=1 01:19:30.711 job3: (groupid=0, jobs=1): err= 0: pid=66464: Mon Dec 9 05:14:12 2024 01:19:30.711 read: IOPS=2595, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec) 01:19:30.711 slat (nsec): min=6134, max=54678, avg=9612.15, stdev=4848.58 01:19:30.711 clat (usec): min=123, max=769, avg=183.23, stdev=69.90 01:19:30.711 lat (usec): min=129, max=777, avg=192.84, stdev=72.50 01:19:30.711 clat percentiles (usec): 01:19:30.711 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 01:19:30.711 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 165], 01:19:30.711 | 70.00th=[ 178], 80.00th=[ 225], 90.00th=[ 258], 95.00th=[ 293], 01:19:30.711 | 99.00th=[ 478], 99.50th=[ 676], 99.90th=[ 750], 99.95th=[ 758], 01:19:30.711 | 99.99th=[ 766] 01:19:30.711 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 01:19:30.711 slat (usec): min=9, max=175, avg=16.81, stdev=12.40 01:19:30.711 clat (usec): min=88, max=483, avg=143.46, stdev=44.98 01:19:30.711 lat (usec): min=98, max=504, avg=160.27, stdev=50.53 01:19:30.711 clat percentiles (usec): 01:19:30.711 | 1.00th=[ 96], 5.00th=[ 101], 10.00th=[ 104], 20.00th=[ 110], 01:19:30.711 | 30.00th=[ 115], 40.00th=[ 119], 50.00th=[ 125], 60.00th=[ 135], 01:19:30.711 | 70.00th=[ 157], 80.00th=[ 182], 90.00th=[ 206], 95.00th=[ 235], 01:19:30.711 | 99.00th=[ 285], 99.50th=[ 302], 
99.90th=[ 363], 99.95th=[ 478], 01:19:30.711 | 99.99th=[ 486] 01:19:30.711 bw ( KiB/s): min=14944, max=14944, per=30.77%, avg=14944.00, stdev= 0.00, samples=1 01:19:30.711 iops : min= 3736, max= 3736, avg=3736.00, stdev= 0.00, samples=1 01:19:30.711 lat (usec) : 100=2.33%, 250=90.49%, 500=6.81%, 750=0.32%, 1000=0.05% 01:19:30.711 cpu : usr=1.60%, sys=5.80%, ctx=5670, majf=0, minf=14 01:19:30.711 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:19:30.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:30.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:30.711 issued rwts: total=2598,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:30.711 latency : target=0, window=0, percentile=100.00%, depth=1 01:19:30.711 01:19:30.711 Run status group 0 (all jobs): 01:19:30.711 READ: bw=43.4MiB/s (45.5MB/s), 9.99MiB/s-11.8MiB/s (10.5MB/s-12.4MB/s), io=43.6MiB (45.7MB), run=1001-1005msec 01:19:30.711 WRITE: bw=47.4MiB/s (49.7MB/s), 11.7MiB/s-12.0MiB/s (12.2MB/s-12.6MB/s), io=47.7MiB (50.0MB), run=1001-1005msec 01:19:30.711 01:19:30.711 Disk stats (read/write): 01:19:30.711 nvme0n1: ios=2610/2976, merge=0/0, ticks=444/396, in_queue=840, util=86.96% 01:19:30.711 nvme0n2: ios=2601/2982, merge=0/0, ticks=419/389, in_queue=808, util=87.54% 01:19:30.711 nvme0n3: ios=2369/2560, merge=0/0, ticks=436/360, in_queue=796, util=88.88% 01:19:30.711 nvme0n4: ios=2413/2560, merge=0/0, ticks=435/362, in_queue=797, util=89.72% 01:19:30.711 05:14:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 01:19:30.711 [global] 01:19:30.711 thread=1 01:19:30.711 invalidate=1 01:19:30.711 rw=randwrite 01:19:30.711 time_based=1 01:19:30.711 runtime=1 01:19:30.711 ioengine=libaio 01:19:30.711 direct=1 01:19:30.711 bs=4096 01:19:30.711 iodepth=1 01:19:30.711 norandommap=0 01:19:30.711 numjobs=1 01:19:30.711 01:19:30.711 verify_dump=1 01:19:30.711 verify_backlog=512 01:19:30.711 verify_state_save=0 01:19:30.711 do_verify=1 01:19:30.711 verify=crc32c-intel 01:19:30.711 [job0] 01:19:30.711 filename=/dev/nvme0n1 01:19:30.711 [job1] 01:19:30.711 filename=/dev/nvme0n2 01:19:30.711 [job2] 01:19:30.711 filename=/dev/nvme0n3 01:19:30.711 [job3] 01:19:30.711 filename=/dev/nvme0n4 01:19:30.711 Could not set queue depth (nvme0n1) 01:19:30.711 Could not set queue depth (nvme0n2) 01:19:30.711 Could not set queue depth (nvme0n3) 01:19:30.711 Could not set queue depth (nvme0n4) 01:19:30.971 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:19:30.971 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:19:30.971 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:19:30.971 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:19:30.971 fio-3.35 01:19:30.971 Starting 4 threads 01:19:31.910 01:19:31.910 job0: (groupid=0, jobs=1): err= 0: pid=66518: Mon Dec 9 05:14:14 2024 01:19:31.910 read: IOPS=2679, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec) 01:19:31.910 slat (nsec): min=5866, max=74044, avg=7130.77, stdev=2365.41 01:19:31.910 clat (usec): min=113, max=2001, avg=190.54, stdev=63.98 01:19:31.910 lat (usec): min=119, max=2008, avg=197.67, stdev=64.28 01:19:31.910 clat percentiles (usec): 01:19:31.910 | 1.00th=[ 122], 5.00th=[ 128], 
10.00th=[ 133], 20.00th=[ 139], 01:19:31.910 | 30.00th=[ 147], 40.00th=[ 157], 50.00th=[ 182], 60.00th=[ 200], 01:19:31.910 | 70.00th=[ 217], 80.00th=[ 239], 90.00th=[ 277], 95.00th=[ 289], 01:19:31.910 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 359], 99.95th=[ 445], 01:19:31.910 | 99.99th=[ 2008] 01:19:31.910 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 01:19:31.911 slat (usec): min=9, max=126, avg=11.98, stdev= 5.76 01:19:31.911 clat (usec): min=69, max=302, avg=139.14, stdev=43.11 01:19:31.911 lat (usec): min=79, max=385, avg=151.12, stdev=44.99 01:19:31.911 clat percentiles (usec): 01:19:31.911 | 1.00th=[ 83], 5.00th=[ 90], 10.00th=[ 94], 20.00th=[ 99], 01:19:31.911 | 30.00th=[ 105], 40.00th=[ 114], 50.00th=[ 127], 60.00th=[ 145], 01:19:31.911 | 70.00th=[ 163], 80.00th=[ 184], 90.00th=[ 202], 95.00th=[ 219], 01:19:31.911 | 99.00th=[ 243], 99.50th=[ 251], 99.90th=[ 262], 99.95th=[ 277], 01:19:31.911 | 99.99th=[ 302] 01:19:31.911 bw ( KiB/s): min=10864, max=10864, per=28.55%, avg=10864.00, stdev= 0.00, samples=1 01:19:31.911 iops : min= 2716, max= 2716, avg=2716.00, stdev= 0.00, samples=1 01:19:31.911 lat (usec) : 100=11.97%, 250=79.75%, 500=8.26% 01:19:31.911 lat (msec) : 4=0.02% 01:19:31.911 cpu : usr=1.00%, sys=4.70%, ctx=5762, majf=0, minf=13 01:19:31.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:19:31.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:31.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:31.911 issued rwts: total=2682,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:31.911 latency : target=0, window=0, percentile=100.00%, depth=1 01:19:31.911 job1: (groupid=0, jobs=1): err= 0: pid=66519: Mon Dec 9 05:14:14 2024 01:19:31.911 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 01:19:31.911 slat (nsec): min=6348, max=91055, avg=16885.57, stdev=9979.45 01:19:31.911 clat (usec): min=198, max=7933, avg=309.35, stdev=242.64 01:19:31.911 lat (usec): min=204, max=7968, avg=326.23, stdev=245.15 01:19:31.911 clat percentiles (usec): 01:19:31.911 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 243], 01:19:31.911 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 302], 01:19:31.911 | 70.00th=[ 330], 80.00th=[ 363], 90.00th=[ 388], 95.00th=[ 412], 01:19:31.911 | 99.00th=[ 519], 99.50th=[ 578], 99.90th=[ 3818], 99.95th=[ 7963], 01:19:31.911 | 99.99th=[ 7963] 01:19:31.911 write: IOPS=1877, BW=7508KiB/s (7689kB/s)(7516KiB/1001msec); 0 zone resets 01:19:31.911 slat (usec): min=9, max=130, avg=28.92, stdev=17.32 01:19:31.911 clat (usec): min=100, max=2720, avg=232.42, stdev=96.31 01:19:31.911 lat (usec): min=115, max=2778, avg=261.34, stdev=105.39 01:19:31.911 clat percentiles (usec): 01:19:31.911 | 1.00th=[ 115], 5.00th=[ 149], 10.00th=[ 174], 20.00th=[ 186], 01:19:31.911 | 30.00th=[ 194], 40.00th=[ 204], 50.00th=[ 217], 60.00th=[ 231], 01:19:31.911 | 70.00th=[ 249], 80.00th=[ 273], 90.00th=[ 314], 95.00th=[ 343], 01:19:31.911 | 99.00th=[ 478], 99.50th=[ 506], 99.90th=[ 1680], 99.95th=[ 2737], 01:19:31.911 | 99.99th=[ 2737] 01:19:31.911 bw ( KiB/s): min= 8192, max= 8192, per=21.53%, avg=8192.00, stdev= 0.00, samples=1 01:19:31.911 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:19:31.911 lat (usec) : 250=50.89%, 500=48.08%, 750=0.79%, 1000=0.03% 01:19:31.911 lat (msec) : 2=0.09%, 4=0.09%, 10=0.03% 01:19:31.911 cpu : usr=1.80%, sys=6.30%, ctx=3415, majf=0, minf=13 01:19:31.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:19:31.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:31.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:31.911 issued rwts: total=1536,1879,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:31.911 latency : target=0, window=0, percentile=100.00%, depth=1 01:19:31.911 job2: (groupid=0, jobs=1): err= 0: pid=66520: Mon Dec 9 05:14:14 2024 01:19:31.911 read: IOPS=2295, BW=9183KiB/s (9403kB/s)(9192KiB/1001msec) 01:19:31.911 slat (nsec): min=6020, max=34620, avg=8447.75, stdev=2062.51 01:19:31.911 clat (usec): min=122, max=722, avg=225.07, stdev=69.93 01:19:31.911 lat (usec): min=129, max=732, avg=233.52, stdev=71.05 01:19:31.911 clat percentiles (usec): 01:19:31.911 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 151], 01:19:31.911 | 30.00th=[ 159], 40.00th=[ 174], 50.00th=[ 253], 60.00th=[ 265], 01:19:31.911 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 310], 01:19:31.911 | 99.00th=[ 396], 99.50th=[ 510], 99.90th=[ 562], 99.95th=[ 676], 01:19:31.911 | 99.99th=[ 725] 01:19:31.911 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 01:19:31.911 slat (usec): min=9, max=111, avg=14.28, stdev= 6.31 01:19:31.911 clat (usec): min=86, max=1666, avg=164.80, stdev=54.87 01:19:31.911 lat (usec): min=96, max=1680, avg=179.08, stdev=57.77 01:19:31.911 clat percentiles (usec): 01:19:31.911 | 1.00th=[ 95], 5.00th=[ 100], 10.00th=[ 104], 20.00th=[ 111], 01:19:31.911 | 30.00th=[ 119], 40.00th=[ 155], 50.00th=[ 182], 60.00th=[ 192], 01:19:31.911 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 225], 01:19:31.911 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 371], 99.95th=[ 441], 01:19:31.911 | 99.99th=[ 1663] 01:19:31.911 bw ( KiB/s): min= 8192, max= 8192, per=21.53%, avg=8192.00, stdev= 0.00, samples=1 01:19:31.911 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:19:31.911 lat (usec) : 100=2.61%, 250=72.27%, 500=24.78%, 750=0.31% 01:19:31.911 lat (msec) : 2=0.02% 01:19:31.911 cpu : usr=1.50%, sys=3.80%, ctx=4858, majf=0, minf=3 01:19:31.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:19:31.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:31.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:31.911 issued rwts: total=2298,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:31.911 latency : target=0, window=0, percentile=100.00%, depth=1 01:19:31.911 job3: (groupid=0, jobs=1): err= 0: pid=66521: Mon Dec 9 05:14:14 2024 01:19:31.911 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 01:19:31.911 slat (nsec): min=8814, max=83663, avg=21659.22, stdev=10945.15 01:19:31.911 clat (usec): min=185, max=578, avg=290.67, stdev=61.95 01:19:31.911 lat (usec): min=205, max=608, avg=312.33, stdev=68.56 01:19:31.911 clat percentiles (usec): 01:19:31.911 | 1.00th=[ 204], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 237], 01:19:31.911 | 30.00th=[ 247], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 289], 01:19:31.911 | 70.00th=[ 318], 80.00th=[ 351], 90.00th=[ 379], 95.00th=[ 404], 01:19:31.911 | 99.00th=[ 457], 99.50th=[ 478], 99.90th=[ 545], 99.95th=[ 578], 01:19:31.911 | 99.99th=[ 578] 01:19:31.911 write: IOPS=2009, BW=8040KiB/s (8233kB/s)(8048KiB/1001msec); 0 zone resets 01:19:31.911 slat (usec): min=12, max=154, avg=33.29, stdev=14.67 01:19:31.911 clat (usec): min=100, max=1037, avg=220.82, stdev=58.35 01:19:31.911 lat (usec): min=115, max=1081, avg=254.11, 
stdev=67.21 01:19:31.911 clat percentiles (usec): 01:19:31.911 | 1.00th=[ 116], 5.00th=[ 149], 10.00th=[ 163], 20.00th=[ 176], 01:19:31.911 | 30.00th=[ 186], 40.00th=[ 196], 50.00th=[ 210], 60.00th=[ 227], 01:19:31.911 | 70.00th=[ 243], 80.00th=[ 265], 90.00th=[ 302], 95.00th=[ 330], 01:19:31.911 | 99.00th=[ 363], 99.50th=[ 375], 99.90th=[ 498], 99.95th=[ 537], 01:19:31.911 | 99.99th=[ 1037] 01:19:31.911 bw ( KiB/s): min= 8192, max= 8192, per=21.53%, avg=8192.00, stdev= 0.00, samples=1 01:19:31.911 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:19:31.911 lat (usec) : 250=55.86%, 500=43.88%, 750=0.23% 01:19:31.911 lat (msec) : 2=0.03% 01:19:31.911 cpu : usr=1.70%, sys=8.30%, ctx=3548, majf=0, minf=17 01:19:31.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:19:31.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:31.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:31.911 issued rwts: total=1536,2012,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:31.911 latency : target=0, window=0, percentile=100.00%, depth=1 01:19:31.911 01:19:31.911 Run status group 0 (all jobs): 01:19:31.911 READ: bw=31.4MiB/s (32.9MB/s), 6138KiB/s-10.5MiB/s (6285kB/s-11.0MB/s), io=31.5MiB (33.0MB), run=1001-1001msec 01:19:31.911 WRITE: bw=37.2MiB/s (39.0MB/s), 7508KiB/s-12.0MiB/s (7689kB/s-12.6MB/s), io=37.2MiB (39.0MB), run=1001-1001msec 01:19:31.911 01:19:31.911 Disk stats (read/write): 01:19:31.911 nvme0n1: ios=2317/2560, merge=0/0, ticks=486/388, in_queue=874, util=90.98% 01:19:31.911 nvme0n2: ios=1410/1536, merge=0/0, ticks=446/384, in_queue=830, util=89.34% 01:19:31.911 nvme0n3: ios=1994/2048, merge=0/0, ticks=502/374, in_queue=876, util=91.57% 01:19:31.911 nvme0n4: ios=1473/1536, merge=0/0, ticks=478/379, in_queue=857, util=91.45% 01:19:31.911 05:14:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 01:19:32.171 [global] 01:19:32.171 thread=1 01:19:32.171 invalidate=1 01:19:32.171 rw=write 01:19:32.171 time_based=1 01:19:32.171 runtime=1 01:19:32.171 ioengine=libaio 01:19:32.171 direct=1 01:19:32.171 bs=4096 01:19:32.171 iodepth=128 01:19:32.171 norandommap=0 01:19:32.171 numjobs=1 01:19:32.171 01:19:32.171 verify_dump=1 01:19:32.171 verify_backlog=512 01:19:32.171 verify_state_save=0 01:19:32.171 do_verify=1 01:19:32.171 verify=crc32c-intel 01:19:32.171 [job0] 01:19:32.171 filename=/dev/nvme0n1 01:19:32.171 [job1] 01:19:32.171 filename=/dev/nvme0n2 01:19:32.171 [job2] 01:19:32.171 filename=/dev/nvme0n3 01:19:32.171 [job3] 01:19:32.171 filename=/dev/nvme0n4 01:19:32.171 Could not set queue depth (nvme0n1) 01:19:32.171 Could not set queue depth (nvme0n2) 01:19:32.171 Could not set queue depth (nvme0n3) 01:19:32.171 Could not set queue depth (nvme0n4) 01:19:32.171 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:19:32.171 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:19:32.171 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:19:32.171 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:19:32.171 fio-3.35 01:19:32.171 Starting 4 threads 01:19:33.581 01:19:33.581 job0: (groupid=0, jobs=1): err= 0: pid=66580: Mon Dec 9 05:14:15 2024 01:19:33.581 read: IOPS=5626, 
BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 01:19:33.581 slat (usec): min=15, max=3904, avg=79.88, stdev=326.79 01:19:33.581 clat (usec): min=7826, max=14896, avg=11019.35, stdev=724.38 01:19:33.581 lat (usec): min=9336, max=14927, avg=11099.23, stdev=661.13 01:19:33.581 clat percentiles (usec): 01:19:33.581 | 1.00th=[ 8979], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10552], 01:19:33.581 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 01:19:33.581 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12125], 01:19:33.581 | 99.00th=[12911], 99.50th=[13698], 99.90th=[14877], 99.95th=[14877], 01:19:33.581 | 99.99th=[14877] 01:19:33.581 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(23.9MiB/1001msec); 0 zone resets 01:19:33.581 slat (usec): min=19, max=2411, avg=80.15, stdev=265.34 01:19:33.581 clat (usec): min=260, max=13597, avg=10532.92, stdev=969.94 01:19:33.581 lat (usec): min=1764, max=13628, avg=10613.08, stdev=939.74 01:19:33.581 clat percentiles (usec): 01:19:33.581 | 1.00th=[ 5538], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10159], 01:19:33.581 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 01:19:33.581 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11338], 95.00th=[11469], 01:19:33.581 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12649], 99.95th=[12911], 01:19:33.581 | 99.99th=[13566] 01:19:33.581 bw ( KiB/s): min=24576, max=24576, per=34.50%, avg=24576.00, stdev= 0.00, samples=1 01:19:33.581 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 01:19:33.581 lat (usec) : 500=0.01% 01:19:33.581 lat (msec) : 2=0.06%, 4=0.21%, 10=10.93%, 20=88.79% 01:19:33.581 cpu : usr=5.80%, sys=23.30%, ctx=446, majf=0, minf=2 01:19:33.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 01:19:33.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:33.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:19:33.581 issued rwts: total=5632,6114,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:33.581 latency : target=0, window=0, percentile=100.00%, depth=128 01:19:33.581 job1: (groupid=0, jobs=1): err= 0: pid=66581: Mon Dec 9 05:14:15 2024 01:19:33.581 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 01:19:33.581 slat (usec): min=7, max=2475, avg=79.70, stdev=325.15 01:19:33.581 clat (usec): min=7822, max=13818, avg=10979.15, stdev=609.99 01:19:33.581 lat (usec): min=8816, max=13847, avg=11058.86, stdev=528.10 01:19:33.581 clat percentiles (usec): 01:19:33.581 | 1.00th=[ 8848], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[10552], 01:19:33.581 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11076], 01:19:33.581 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11731], 95.00th=[11863], 01:19:33.581 | 99.00th=[12387], 99.50th=[12518], 99.90th=[13173], 99.95th=[13829], 01:19:33.581 | 99.99th=[13829] 01:19:33.581 write: IOPS=6100, BW=23.8MiB/s (25.0MB/s)(23.9MiB/1002msec); 0 zone resets 01:19:33.581 slat (usec): min=18, max=5307, avg=80.65, stdev=274.84 01:19:33.581 clat (usec): min=275, max=14853, avg=10594.67, stdev=1070.56 01:19:33.581 lat (usec): min=1876, max=14883, avg=10675.32, stdev=1040.50 01:19:33.581 clat percentiles (usec): 01:19:33.581 | 1.00th=[ 5669], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10159], 01:19:33.581 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 01:19:33.581 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11207], 95.00th=[11469], 01:19:33.581 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14746], 99.95th=[14877], 
01:19:33.581 | 99.99th=[14877] 01:19:33.581 bw ( KiB/s): min=24526, max=24526, per=34.43%, avg=24526.00, stdev= 0.00, samples=1 01:19:33.581 iops : min= 6131, max= 6131, avg=6131.00, stdev= 0.00, samples=1 01:19:33.581 lat (usec) : 500=0.01% 01:19:33.581 lat (msec) : 2=0.03%, 4=0.24%, 10=7.11%, 20=92.61% 01:19:33.581 cpu : usr=6.09%, sys=22.68%, ctx=438, majf=0, minf=1 01:19:33.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 01:19:33.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:33.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:19:33.581 issued rwts: total=5632,6113,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:33.581 latency : target=0, window=0, percentile=100.00%, depth=128 01:19:33.581 job2: (groupid=0, jobs=1): err= 0: pid=66582: Mon Dec 9 05:14:15 2024 01:19:33.581 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 01:19:33.581 slat (usec): min=7, max=8899, avg=196.90, stdev=867.36 01:19:33.581 clat (usec): min=12877, max=60089, avg=24758.49, stdev=7185.08 01:19:33.581 lat (usec): min=12918, max=60120, avg=24955.39, stdev=7250.90 01:19:33.581 clat percentiles (usec): 01:19:33.581 | 1.00th=[14484], 5.00th=[16057], 10.00th=[16909], 20.00th=[17695], 01:19:33.581 | 30.00th=[20841], 40.00th=[23200], 50.00th=[23725], 60.00th=[24249], 01:19:33.581 | 70.00th=[26870], 80.00th=[30802], 90.00th=[34866], 95.00th=[35914], 01:19:33.581 | 99.00th=[51643], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 01:19:33.581 | 99.99th=[60031] 01:19:33.581 write: IOPS=2825, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1002msec); 0 zone resets 01:19:33.581 slat (usec): min=19, max=7198, avg=162.26, stdev=680.18 01:19:33.581 clat (usec): min=1771, max=63833, avg=22260.11, stdev=13237.99 01:19:33.581 lat (usec): min=1809, max=63877, avg=22422.37, stdev=13331.88 01:19:33.581 clat percentiles (usec): 01:19:33.581 | 1.00th=[ 3032], 5.00th=[11076], 10.00th=[12911], 20.00th=[14877], 01:19:33.581 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16909], 60.00th=[19268], 01:19:33.581 | 70.00th=[20055], 80.00th=[27395], 90.00th=[45351], 95.00th=[54264], 01:19:33.581 | 99.00th=[62129], 99.50th=[63177], 99.90th=[63701], 99.95th=[63701], 01:19:33.581 | 99.99th=[63701] 01:19:33.581 bw ( KiB/s): min= 9344, max=12288, per=15.18%, avg=10816.00, stdev=2081.72, samples=2 01:19:33.581 iops : min= 2336, max= 3072, avg=2704.00, stdev=520.43, samples=2 01:19:33.581 lat (msec) : 2=0.09%, 4=0.83%, 10=1.11%, 20=47.08%, 50=46.50% 01:19:33.581 lat (msec) : 100=4.38% 01:19:33.581 cpu : usr=3.20%, sys=15.88%, ctx=213, majf=0, minf=3 01:19:33.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 01:19:33.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:33.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:19:33.581 issued rwts: total=2560,2831,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:33.582 latency : target=0, window=0, percentile=100.00%, depth=128 01:19:33.582 job3: (groupid=0, jobs=1): err= 0: pid=66583: Mon Dec 9 05:14:15 2024 01:19:33.582 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 01:19:33.582 slat (usec): min=4, max=8922, avg=190.08, stdev=911.98 01:19:33.582 clat (usec): min=12267, max=57127, avg=25456.47, stdev=10030.80 01:19:33.582 lat (usec): min=12321, max=57150, avg=25646.55, stdev=10120.03 01:19:33.582 clat percentiles (usec): 01:19:33.582 | 1.00th=[13304], 5.00th=[14746], 10.00th=[15533], 20.00th=[16319], 01:19:33.582 | 30.00th=[16909], 
40.00th=[22676], 50.00th=[23725], 60.00th=[24511], 01:19:33.582 | 70.00th=[26870], 80.00th=[34866], 90.00th=[43254], 95.00th=[45876], 01:19:33.582 | 99.00th=[49546], 99.50th=[53740], 99.90th=[56886], 99.95th=[56886], 01:19:33.582 | 99.99th=[56886] 01:19:33.582 write: IOPS=2827, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1005msec); 0 zone resets 01:19:33.582 slat (usec): min=22, max=8902, avg=172.02, stdev=834.47 01:19:33.582 clat (usec): min=2421, max=56810, avg=21461.50, stdev=8966.16 01:19:33.582 lat (usec): min=6290, max=56843, avg=21633.52, stdev=9047.85 01:19:33.582 clat percentiles (usec): 01:19:33.582 | 1.00th=[12387], 5.00th=[12911], 10.00th=[13042], 20.00th=[14877], 01:19:33.582 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17957], 60.00th=[19268], 01:19:33.582 | 70.00th=[23200], 80.00th=[28705], 90.00th=[29754], 95.00th=[44827], 01:19:33.582 | 99.00th=[51119], 99.50th=[53740], 99.90th=[56886], 99.95th=[56886], 01:19:33.582 | 99.99th=[56886] 01:19:33.582 bw ( KiB/s): min=10552, max=11160, per=15.24%, avg=10856.00, stdev=429.92, samples=2 01:19:33.582 iops : min= 2638, max= 2790, avg=2714.00, stdev=107.48, samples=2 01:19:33.582 lat (msec) : 4=0.02%, 10=0.15%, 20=50.83%, 50=48.06%, 100=0.94% 01:19:33.582 cpu : usr=2.69%, sys=10.46%, ctx=214, majf=0, minf=11 01:19:33.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 01:19:33.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:33.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:19:33.582 issued rwts: total=2560,2842,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:33.582 latency : target=0, window=0, percentile=100.00%, depth=128 01:19:33.582 01:19:33.582 Run status group 0 (all jobs): 01:19:33.582 READ: bw=63.7MiB/s (66.8MB/s), 9.95MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=64.0MiB (67.1MB), run=1001-1005msec 01:19:33.582 WRITE: bw=69.6MiB/s (73.0MB/s), 11.0MiB/s-23.9MiB/s (11.6MB/s-25.0MB/s), io=69.9MiB (73.3MB), run=1001-1005msec 01:19:33.582 01:19:33.582 Disk stats (read/write): 01:19:33.582 nvme0n1: ios=5170/5120, merge=0/0, ticks=12073/10346, in_queue=22419, util=89.98% 01:19:33.582 nvme0n2: ios=5169/5120, merge=0/0, ticks=12067/10455, in_queue=22522, util=89.53% 01:19:33.582 nvme0n3: ios=2069/2276, merge=0/0, ticks=27069/23363, in_queue=50432, util=90.02% 01:19:33.582 nvme0n4: ios=2518/2560, merge=0/0, ticks=19817/13684, in_queue=33501, util=90.40% 01:19:33.582 05:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 01:19:33.582 [global] 01:19:33.582 thread=1 01:19:33.582 invalidate=1 01:19:33.582 rw=randwrite 01:19:33.582 time_based=1 01:19:33.582 runtime=1 01:19:33.582 ioengine=libaio 01:19:33.582 direct=1 01:19:33.582 bs=4096 01:19:33.582 iodepth=128 01:19:33.582 norandommap=0 01:19:33.582 numjobs=1 01:19:33.582 01:19:33.582 verify_dump=1 01:19:33.582 verify_backlog=512 01:19:33.582 verify_state_save=0 01:19:33.582 do_verify=1 01:19:33.582 verify=crc32c-intel 01:19:33.582 [job0] 01:19:33.582 filename=/dev/nvme0n1 01:19:33.582 [job1] 01:19:33.582 filename=/dev/nvme0n2 01:19:33.582 [job2] 01:19:33.582 filename=/dev/nvme0n3 01:19:33.582 [job3] 01:19:33.582 filename=/dev/nvme0n4 01:19:33.582 Could not set queue depth (nvme0n1) 01:19:33.582 Could not set queue depth (nvme0n2) 01:19:33.582 Could not set queue depth (nvme0n3) 01:19:33.582 Could not set queue depth (nvme0n4) 01:19:33.582 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 01:19:33.582 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:19:33.582 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:19:33.582 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:19:33.582 fio-3.35 01:19:33.582 Starting 4 threads 01:19:34.963 01:19:34.963 job0: (groupid=0, jobs=1): err= 0: pid=66638: Mon Dec 9 05:14:17 2024 01:19:34.963 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 01:19:34.963 slat (usec): min=11, max=6326, avg=89.55, stdev=518.14 01:19:34.963 clat (usec): min=8021, max=21038, avg=12715.04, stdev=1383.88 01:19:34.963 lat (usec): min=8039, max=24631, avg=12804.59, stdev=1410.91 01:19:34.963 clat percentiles (usec): 01:19:34.963 | 1.00th=[ 8356], 5.00th=[11338], 10.00th=[11863], 20.00th=[12256], 01:19:34.963 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12649], 01:19:34.963 | 70.00th=[13042], 80.00th=[13435], 90.00th=[13698], 95.00th=[14091], 01:19:34.963 | 99.00th=[19268], 99.50th=[20055], 99.90th=[21103], 99.95th=[21103], 01:19:34.963 | 99.99th=[21103] 01:19:34.963 write: IOPS=5352, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1004msec); 0 zone resets 01:19:34.963 slat (usec): min=4, max=8925, avg=91.80, stdev=484.99 01:19:34.963 clat (usec): min=703, max=15995, avg=11512.29, stdev=1394.26 01:19:34.963 lat (usec): min=3914, max=18144, avg=11604.09, stdev=1333.62 01:19:34.963 clat percentiles (usec): 01:19:34.963 | 1.00th=[ 5800], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 01:19:34.963 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 01:19:34.963 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12780], 95.00th=[13304], 01:19:34.963 | 99.00th=[15664], 99.50th=[15926], 99.90th=[15926], 99.95th=[16057], 01:19:34.963 | 99.99th=[16057] 01:19:34.963 bw ( KiB/s): min=20439, max=21488, per=26.37%, avg=20963.50, stdev=741.76, samples=2 01:19:34.963 iops : min= 5109, max= 5372, avg=5240.50, stdev=185.97, samples=2 01:19:34.963 lat (usec) : 750=0.01% 01:19:34.963 lat (msec) : 4=0.03%, 10=4.46%, 20=95.27%, 50=0.23% 01:19:34.963 cpu : usr=5.18%, sys=19.14%, ctx=224, majf=0, minf=15 01:19:34.963 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 01:19:34.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:34.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:19:34.963 issued rwts: total=5120,5374,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:34.963 latency : target=0, window=0, percentile=100.00%, depth=128 01:19:34.963 job1: (groupid=0, jobs=1): err= 0: pid=66639: Mon Dec 9 05:14:17 2024 01:19:34.963 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 01:19:34.963 slat (usec): min=7, max=6095, avg=89.37, stdev=515.70 01:19:34.963 clat (usec): min=7832, max=20453, avg=12678.26, stdev=1397.51 01:19:34.963 lat (usec): min=7851, max=23841, avg=12767.63, stdev=1423.25 01:19:34.963 clat percentiles (usec): 01:19:34.963 | 1.00th=[ 8455], 5.00th=[11207], 10.00th=[11731], 20.00th=[11994], 01:19:34.963 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12780], 01:19:34.963 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13960], 95.00th=[14222], 01:19:34.963 | 99.00th=[19530], 99.50th=[19792], 99.90th=[20317], 99.95th=[20317], 01:19:34.963 | 99.99th=[20579] 01:19:34.963 write: IOPS=5422, BW=21.2MiB/s (22.2MB/s)(21.2MiB/1003msec); 0 zone 
resets 01:19:34.963 slat (usec): min=21, max=7386, avg=90.42, stdev=446.91 01:19:34.963 clat (usec): min=382, max=15857, avg=11414.59, stdev=1240.27 01:19:34.963 lat (usec): min=3605, max=15890, avg=11505.00, stdev=1175.33 01:19:34.963 clat percentiles (usec): 01:19:34.963 | 1.00th=[ 5473], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[10945], 01:19:34.963 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 01:19:34.963 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 01:19:34.963 | 99.00th=[15664], 99.50th=[15795], 99.90th=[15795], 99.95th=[15795], 01:19:34.963 | 99.99th=[15795] 01:19:34.963 bw ( KiB/s): min=20536, max=21952, per=26.72%, avg=21244.00, stdev=1001.26, samples=2 01:19:34.963 iops : min= 5134, max= 5488, avg=5311.00, stdev=250.32, samples=2 01:19:34.963 lat (usec) : 500=0.01% 01:19:34.963 lat (msec) : 4=0.11%, 10=4.74%, 20=94.98%, 50=0.16% 01:19:34.963 cpu : usr=5.49%, sys=20.36%, ctx=225, majf=0, minf=17 01:19:34.963 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 01:19:34.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:34.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:19:34.963 issued rwts: total=5120,5439,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:34.963 latency : target=0, window=0, percentile=100.00%, depth=128 01:19:34.963 job2: (groupid=0, jobs=1): err= 0: pid=66640: Mon Dec 9 05:14:17 2024 01:19:34.963 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 01:19:34.963 slat (usec): min=7, max=7030, avg=106.33, stdev=575.39 01:19:34.963 clat (usec): min=2494, max=26512, avg=14425.55, stdev=1980.70 01:19:34.963 lat (usec): min=2512, max=27765, avg=14531.88, stdev=1973.12 01:19:34.963 clat percentiles (usec): 01:19:34.963 | 1.00th=[ 9241], 5.00th=[11207], 10.00th=[12649], 20.00th=[13698], 01:19:34.963 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14353], 60.00th=[14484], 01:19:34.963 | 70.00th=[14877], 80.00th=[15401], 90.00th=[16057], 95.00th=[16319], 01:19:34.963 | 99.00th=[21890], 99.50th=[24249], 99.90th=[25035], 99.95th=[26346], 01:19:34.963 | 99.99th=[26608] 01:19:34.963 write: IOPS=4599, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 01:19:34.963 slat (usec): min=20, max=8953, avg=100.32, stdev=533.18 01:19:34.963 clat (usec): min=2247, max=18153, avg=13125.61, stdev=1288.38 01:19:34.963 lat (usec): min=2309, max=20462, avg=13225.93, stdev=1210.62 01:19:34.963 clat percentiles (usec): 01:19:34.963 | 1.00th=[ 9241], 5.00th=[11469], 10.00th=[11863], 20.00th=[12256], 01:19:34.963 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13304], 60.00th=[13435], 01:19:34.963 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14746], 01:19:34.963 | 99.00th=[17957], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 01:19:34.963 | 99.99th=[18220] 01:19:34.963 bw ( KiB/s): min=17208, max=19656, per=23.18%, avg=18432.00, stdev=1731.00, samples=2 01:19:34.963 iops : min= 4302, max= 4914, avg=4608.00, stdev=432.75, samples=2 01:19:34.963 lat (msec) : 4=0.14%, 10=1.88%, 20=97.09%, 50=0.89% 01:19:34.963 cpu : usr=5.19%, sys=17.66%, ctx=210, majf=0, minf=5 01:19:34.963 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 01:19:34.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:34.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:19:34.963 issued rwts: total=4608,4613,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:34.963 latency : target=0, window=0, 
percentile=100.00%, depth=128 01:19:34.963 job3: (groupid=0, jobs=1): err= 0: pid=66641: Mon Dec 9 05:14:17 2024 01:19:34.963 read: IOPS=4418, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1008msec) 01:19:34.963 slat (usec): min=7, max=3345, avg=106.24, stdev=442.68 01:19:34.963 clat (usec): min=1005, max=16901, avg=14111.20, stdev=1350.29 01:19:34.963 lat (usec): min=1022, max=16951, avg=14217.44, stdev=1285.95 01:19:34.963 clat percentiles (usec): 01:19:34.963 | 1.00th=[ 7570], 5.00th=[12256], 10.00th=[13566], 20.00th=[13829], 01:19:34.963 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14222], 60.00th=[14353], 01:19:34.964 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15139], 95.00th=[15533], 01:19:34.964 | 99.00th=[15795], 99.50th=[15926], 99.90th=[15926], 99.95th=[16057], 01:19:34.964 | 99.99th=[16909] 01:19:34.964 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 01:19:34.964 slat (usec): min=21, max=2830, avg=104.03, stdev=393.69 01:19:34.964 clat (usec): min=10387, max=16840, avg=13797.95, stdev=700.24 01:19:34.964 lat (usec): min=10443, max=17080, avg=13901.98, stdev=619.82 01:19:34.964 clat percentiles (usec): 01:19:34.964 | 1.00th=[11338], 5.00th=[12649], 10.00th=[13042], 20.00th=[13304], 01:19:34.964 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13829], 60.00th=[14091], 01:19:34.964 | 70.00th=[14222], 80.00th=[14353], 90.00th=[14615], 95.00th=[14746], 01:19:34.964 | 99.00th=[15270], 99.50th=[15401], 99.90th=[15795], 99.95th=[16057], 01:19:34.964 | 99.99th=[16909] 01:19:34.964 bw ( KiB/s): min=18312, max=18589, per=23.21%, avg=18450.50, stdev=195.87, samples=2 01:19:34.964 iops : min= 4578, max= 4647, avg=4612.50, stdev=48.79, samples=2 01:19:34.964 lat (msec) : 2=0.07%, 4=0.10%, 10=0.61%, 20=99.23% 01:19:34.964 cpu : usr=4.87%, sys=18.37%, ctx=413, majf=0, minf=13 01:19:34.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 01:19:34.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:34.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:19:34.964 issued rwts: total=4454,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:34.964 latency : target=0, window=0, percentile=100.00%, depth=128 01:19:34.964 01:19:34.964 Run status group 0 (all jobs): 01:19:34.964 READ: bw=74.8MiB/s (78.4MB/s), 17.3MiB/s-19.9MiB/s (18.1MB/s-20.9MB/s), io=75.4MiB (79.1MB), run=1003-1008msec 01:19:34.964 WRITE: bw=77.6MiB/s (81.4MB/s), 17.9MiB/s-21.2MiB/s (18.7MB/s-22.2MB/s), io=78.3MiB (82.1MB), run=1003-1008msec 01:19:34.964 01:19:34.964 Disk stats (read/write): 01:19:34.964 nvme0n1: ios=4523/4608, merge=0/0, ticks=52745/47798, in_queue=100543, util=89.68% 01:19:34.964 nvme0n2: ios=4609/4608, merge=0/0, ticks=53343/47172, in_queue=100515, util=89.54% 01:19:34.964 nvme0n3: ios=3911/4096, merge=0/0, ticks=52638/48544, in_queue=101182, util=91.06% 01:19:34.964 nvme0n4: ios=3852/4096, merge=0/0, ticks=12325/11498, in_queue=23823, util=90.72% 01:19:34.964 05:14:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 01:19:34.964 05:14:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66658 01:19:34.964 05:14:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 01:19:34.964 05:14:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 01:19:34.964 [global] 01:19:34.964 thread=1 01:19:34.964 invalidate=1 01:19:34.964 rw=read 01:19:34.964 time_based=1 
01:19:34.964 runtime=10 01:19:34.964 ioengine=libaio 01:19:34.964 direct=1 01:19:34.964 bs=4096 01:19:34.964 iodepth=1 01:19:34.964 norandommap=1 01:19:34.964 numjobs=1 01:19:34.964 01:19:34.964 [job0] 01:19:34.964 filename=/dev/nvme0n1 01:19:34.964 [job1] 01:19:34.964 filename=/dev/nvme0n2 01:19:34.964 [job2] 01:19:34.964 filename=/dev/nvme0n3 01:19:34.964 [job3] 01:19:34.964 filename=/dev/nvme0n4 01:19:34.964 Could not set queue depth (nvme0n1) 01:19:34.964 Could not set queue depth (nvme0n2) 01:19:34.964 Could not set queue depth (nvme0n3) 01:19:34.964 Could not set queue depth (nvme0n4) 01:19:35.222 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:19:35.222 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:19:35.222 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:19:35.222 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:19:35.222 fio-3.35 01:19:35.222 Starting 4 threads 01:19:38.494 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 01:19:38.494 fio: pid=66704, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:19:38.494 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=49700864, buflen=4096 01:19:38.494 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 01:19:38.494 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=75931648, buflen=4096 01:19:38.494 fio: pid=66702, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:19:38.494 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:19:38.494 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 01:19:38.494 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=60235776, buflen=4096 01:19:38.494 fio: pid=66699, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:19:38.751 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:19:38.751 05:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 01:19:39.009 fio: pid=66700, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:19:39.009 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=28127232, buflen=4096 01:19:39.009 01:19:39.009 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66699: Mon Dec 9 05:14:21 2024 01:19:39.009 read: IOPS=4388, BW=17.1MiB/s (18.0MB/s)(57.4MiB/3351msec) 01:19:39.009 slat (usec): min=4, max=14581, avg=10.61, stdev=189.34 01:19:39.009 clat (usec): min=111, max=3480, avg=216.50, stdev=52.11 01:19:39.009 lat (usec): min=125, max=14794, avg=227.12, stdev=196.35 01:19:39.009 clat percentiles (usec): 01:19:39.009 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 157], 20.00th=[ 196], 01:19:39.009 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 
01:19:39.009 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 258], 01:19:39.009 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 424], 99.95th=[ 906], 01:19:39.009 | 99.99th=[ 2376] 01:19:39.009 bw ( KiB/s): min=16582, max=18050, per=22.65%, avg=17150.67, stdev=538.94, samples=6 01:19:39.009 iops : min= 4145, max= 4512, avg=4287.33, stdev=134.72, samples=6 01:19:39.009 lat (usec) : 250=89.89%, 500=10.01%, 750=0.03%, 1000=0.01% 01:19:39.009 lat (msec) : 2=0.03%, 4=0.01% 01:19:39.009 cpu : usr=0.57%, sys=3.37%, ctx=14733, majf=0, minf=1 01:19:39.009 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:19:39.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:39.009 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:39.009 issued rwts: total=14707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:39.009 latency : target=0, window=0, percentile=100.00%, depth=1 01:19:39.009 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66700: Mon Dec 9 05:14:21 2024 01:19:39.009 read: IOPS=6412, BW=25.0MiB/s (26.3MB/s)(90.8MiB/3626msec) 01:19:39.009 slat (usec): min=5, max=14838, avg=10.62, stdev=161.76 01:19:39.009 clat (usec): min=99, max=1815, avg=144.54, stdev=21.91 01:19:39.009 lat (usec): min=108, max=15129, avg=155.15, stdev=164.23 01:19:39.009 clat percentiles (usec): 01:19:39.009 | 1.00th=[ 119], 5.00th=[ 125], 10.00th=[ 129], 20.00th=[ 133], 01:19:39.009 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 147], 01:19:39.009 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 167], 01:19:39.009 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 247], 99.95th=[ 429], 01:19:39.009 | 99.99th=[ 1004] 01:19:39.009 bw ( KiB/s): min=24127, max=26658, per=34.26%, avg=25941.67, stdev=941.14, samples=6 01:19:39.009 iops : min= 6031, max= 6664, avg=6485.00, stdev=235.38, samples=6 01:19:39.009 lat (usec) : 100=0.01%, 250=99.89%, 500=0.06%, 750=0.02%, 1000=0.01% 01:19:39.009 lat (msec) : 2=0.01% 01:19:39.009 cpu : usr=0.88%, sys=4.69%, ctx=23264, majf=0, minf=1 01:19:39.009 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:19:39.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:39.009 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:39.009 issued rwts: total=23252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:39.009 latency : target=0, window=0, percentile=100.00%, depth=1 01:19:39.009 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66702: Mon Dec 9 05:14:21 2024 01:19:39.009 read: IOPS=5980, BW=23.4MiB/s (24.5MB/s)(72.4MiB/3100msec) 01:19:39.009 slat (usec): min=5, max=7831, avg= 8.31, stdev=77.93 01:19:39.009 clat (usec): min=114, max=3285, avg=158.10, stdev=40.12 01:19:39.009 lat (usec): min=121, max=8036, avg=166.41, stdev=88.04 01:19:39.009 clat percentiles (usec): 01:19:39.009 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 01:19:39.009 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 01:19:39.009 | 70.00th=[ 163], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 192], 01:19:39.009 | 99.00th=[ 212], 99.50th=[ 225], 99.90th=[ 424], 99.95th=[ 537], 01:19:39.009 | 99.99th=[ 2704] 01:19:39.009 bw ( KiB/s): min=23536, max=24678, per=31.78%, avg=24061.60, stdev=455.43, samples=5 01:19:39.009 iops : min= 5884, max= 6169, avg=6015.00, stdev=113.58, samples=5 01:19:39.009 lat (usec) : 250=99.74%, 
500=0.19%, 750=0.04%, 1000=0.01% 01:19:39.009 lat (msec) : 2=0.01%, 4=0.01% 01:19:39.009 cpu : usr=0.77%, sys=4.39%, ctx=18544, majf=0, minf=2 01:19:39.009 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:19:39.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:39.009 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:39.009 issued rwts: total=18539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:39.009 latency : target=0, window=0, percentile=100.00%, depth=1 01:19:39.010 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66704: Mon Dec 9 05:14:21 2024 01:19:39.010 read: IOPS=4234, BW=16.5MiB/s (17.3MB/s)(47.4MiB/2866msec) 01:19:39.010 slat (nsec): min=4278, max=93839, avg=6547.09, stdev=3637.73 01:19:39.010 clat (usec): min=162, max=1646, avg=228.78, stdev=29.85 01:19:39.010 lat (usec): min=187, max=1651, avg=235.33, stdev=30.33 01:19:39.010 clat percentiles (usec): 01:19:39.010 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 212], 01:19:39.010 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 233], 01:19:39.010 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 262], 01:19:39.010 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 338], 99.95th=[ 375], 01:19:39.010 | 99.99th=[ 1385] 01:19:39.010 bw ( KiB/s): min=16582, max=17389, per=22.42%, avg=16970.80, stdev=347.04, samples=5 01:19:39.010 iops : min= 4145, max= 4347, avg=4242.40, stdev=86.85, samples=5 01:19:39.010 lat (usec) : 250=87.79%, 500=12.16%, 1000=0.01% 01:19:39.010 lat (msec) : 2=0.03% 01:19:39.010 cpu : usr=0.49%, sys=2.83%, ctx=12149, majf=0, minf=2 01:19:39.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:19:39.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:39.010 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:19:39.010 issued rwts: total=12135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:19:39.010 latency : target=0, window=0, percentile=100.00%, depth=1 01:19:39.010 01:19:39.010 Run status group 0 (all jobs): 01:19:39.010 READ: bw=73.9MiB/s (77.5MB/s), 16.5MiB/s-25.0MiB/s (17.3MB/s-26.3MB/s), io=268MiB (281MB), run=2866-3626msec 01:19:39.010 01:19:39.010 Disk stats (read/write): 01:19:39.010 nvme0n1: ios=13458/0, merge=0/0, ticks=2928/0, in_queue=2928, util=95.28% 01:19:39.010 nvme0n2: ios=21592/0, merge=0/0, ticks=3156/0, in_queue=3156, util=95.42% 01:19:39.010 nvme0n3: ios=17403/0, merge=0/0, ticks=2758/0, in_queue=2758, util=96.78% 01:19:39.010 nvme0n4: ios=11165/0, merge=0/0, ticks=2463/0, in_queue=2463, util=96.55% 01:19:39.010 05:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:19:39.010 05:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 01:19:39.268 05:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:19:39.268 05:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 01:19:39.525 05:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:19:39.525 05:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 01:19:39.783 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:19:39.783 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 01:19:40.040 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:19:40.040 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 01:19:40.299 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 01:19:40.299 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66658 01:19:40.299 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 01:19:40.299 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:19:40.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:19:40.299 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:19:40.299 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 01:19:40.299 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:19:40.299 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:19:40.299 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:19:40.299 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:19:40.299 nvmf hotplug test: fio failed as expected 01:19:40.299 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 01:19:40.299 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 01:19:40.299 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 01:19:40.299 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:19:40.559 05:14:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:19:40.559 rmmod nvme_tcp 01:19:40.559 rmmod nvme_fabrics 01:19:40.559 rmmod nvme_keyring 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66278 ']' 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66278 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66278 ']' 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66278 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66278 01:19:40.559 killing process with pid 66278 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66278' 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66278 01:19:40.559 05:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66278 01:19:40.818 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:19:40.818 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:19:40.818 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:19:40.818 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 01:19:40.818 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 01:19:40.819 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:19:40.819 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 01:19:40.819 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:19:40.819 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:19:40.819 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:19:40.819 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:19:40.819 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br 
nomaster 01:19:40.819 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:19:41.078 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:19:41.078 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:19:41.078 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:19:41.078 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:19:41.078 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:19:41.078 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:19:41.078 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:19:41.078 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:41.078 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:41.078 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 01:19:41.078 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:41.078 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:19:41.078 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:41.078 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 01:19:41.078 ************************************ 01:19:41.078 END TEST nvmf_fio_target 01:19:41.078 ************************************ 01:19:41.078 01:19:41.078 real 0m19.555s 01:19:41.078 user 1m14.177s 01:19:41.078 sys 0m9.266s 01:19:41.078 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:41.078 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:19:41.338 05:14:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 01:19:41.338 05:14:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:19:41.338 05:14:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:41.338 05:14:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:19:41.338 ************************************ 01:19:41.338 START TEST nvmf_bdevio 01:19:41.338 ************************************ 01:19:41.338 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 01:19:41.338 * Looking for test storage... 
01:19:41.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:19:41.338 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:19:41.338 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 01:19:41.338 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:19:41.338 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:19:41.338 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:19:41.338 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 01:19:41.338 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 01:19:41.338 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 01:19:41.338 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 01:19:41.338 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:19:41.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:41.339 --rc genhtml_branch_coverage=1 01:19:41.339 --rc genhtml_function_coverage=1 01:19:41.339 --rc genhtml_legend=1 01:19:41.339 --rc geninfo_all_blocks=1 01:19:41.339 --rc geninfo_unexecuted_blocks=1 01:19:41.339 01:19:41.339 ' 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:19:41.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:41.339 --rc genhtml_branch_coverage=1 01:19:41.339 --rc genhtml_function_coverage=1 01:19:41.339 --rc genhtml_legend=1 01:19:41.339 --rc geninfo_all_blocks=1 01:19:41.339 --rc geninfo_unexecuted_blocks=1 01:19:41.339 01:19:41.339 ' 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:19:41.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:41.339 --rc genhtml_branch_coverage=1 01:19:41.339 --rc genhtml_function_coverage=1 01:19:41.339 --rc genhtml_legend=1 01:19:41.339 --rc geninfo_all_blocks=1 01:19:41.339 --rc geninfo_unexecuted_blocks=1 01:19:41.339 01:19:41.339 ' 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:19:41.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:41.339 --rc genhtml_branch_coverage=1 01:19:41.339 --rc genhtml_function_coverage=1 01:19:41.339 --rc genhtml_legend=1 01:19:41.339 --rc geninfo_all_blocks=1 01:19:41.339 --rc geninfo_unexecuted_blocks=1 01:19:41.339 01:19:41.339 ' 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:19:41.339 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:19:41.599 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:19:41.600 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
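[editor's note] The nvmftestinit call traced above drives the veth-based test network that the following log lines build step by step. As a condensed, illustrative sketch only (not the harness itself — the real work is done by nvmf_veth_init in test/nvmf/common.sh), using only the namespace, interface, address, and port names that appear in this log, the topology amounts to:

    # sketch of the veth/bridge topology nvmf_veth_init constructs (names/addresses taken from the trace below)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up; ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                        # bridge the two halves together
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the listen port

The pings to 10.0.0.1-10.0.0.4 later in the log are the harness verifying this bridged path end to end before starting the target.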
01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:19:41.600 Cannot find device "nvmf_init_br" 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:19:41.600 Cannot find device "nvmf_init_br2" 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:19:41.600 Cannot find device "nvmf_tgt_br" 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:19:41.600 Cannot find device "nvmf_tgt_br2" 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:19:41.600 Cannot find device "nvmf_init_br" 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:19:41.600 Cannot find device "nvmf_init_br2" 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:19:41.600 Cannot find device "nvmf_tgt_br" 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:19:41.600 Cannot find device "nvmf_tgt_br2" 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:19:41.600 Cannot find device "nvmf_br" 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:19:41.600 Cannot find device "nvmf_init_if" 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 01:19:41.600 05:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:19:41.600 Cannot find device "nvmf_init_if2" 01:19:41.600 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 01:19:41.600 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:41.600 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:41.600 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 01:19:41.600 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:41.600 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:41.600 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 01:19:41.600 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:19:41.600 
05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:19:41.600 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:19:41.600 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:19:41.600 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:19:41.860 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:19:41.861 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:19:41.861 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.172 ms 01:19:41.861 01:19:41.861 --- 10.0.0.3 ping statistics --- 01:19:41.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:41.861 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:19:41.861 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:19:41.861 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.103 ms 01:19:41.861 01:19:41.861 --- 10.0.0.4 ping statistics --- 01:19:41.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:41.861 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:19:41.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:19:41.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 01:19:41.861 01:19:41.861 --- 10.0.0.1 ping statistics --- 01:19:41.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:41.861 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:19:41.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:19:41.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 01:19:41.861 01:19:41.861 --- 10.0.0.2 ping statistics --- 01:19:41.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:41.861 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:19:41.861 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:19:42.120 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 01:19:42.120 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:19:42.120 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 01:19:42.120 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:19:42.120 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=67021 01:19:42.120 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 01:19:42.120 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 67021 01:19:42.120 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 67021 ']' 01:19:42.120 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:42.120 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:42.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:19:42.120 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:42.120 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:42.120 05:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:19:42.120 [2024-12-09 05:14:24.376535] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
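nvmf_tgt is launched inside the nvmf_tgt_ns_spdk namespace (via ip netns exec) with core mask 0x78, i.e. cores 3-6; the four "Reactor started on core" notices that follow match that mask. A quick sketch of the mask-to-core mapping in plain bash (variable names are illustrative only):

    mask=0x78
    for core in {0..7}; do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done
    # prints cores 3 4 5 6
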
01:19:42.121 [2024-12-09 05:14:24.376606] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:19:42.121 [2024-12-09 05:14:24.529837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:19:42.380 [2024-12-09 05:14:24.585988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:19:42.380 [2024-12-09 05:14:24.586034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:19:42.380 [2024-12-09 05:14:24.586040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:19:42.380 [2024-12-09 05:14:24.586045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:19:42.380 [2024-12-09 05:14:24.586049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:19:42.380 [2024-12-09 05:14:24.586872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:19:42.380 [2024-12-09 05:14:24.587129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:19:42.380 [2024-12-09 05:14:24.587304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:19:42.380 [2024-12-09 05:14:24.587309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:19:42.380 [2024-12-09 05:14:24.630291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:19:42.948 [2024-12-09 05:14:25.311331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:19:42.948 Malloc0 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:19:42.948 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:42.949 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:19:42.949 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:42.949 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:19:42.949 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:42.949 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:19:42.949 [2024-12-09 05:14:25.383103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:19:42.949 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:42.949 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 01:19:42.949 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 01:19:42.949 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 01:19:42.949 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 01:19:42.949 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:19:42.949 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:19:42.949 { 01:19:42.949 "params": { 01:19:42.949 "name": "Nvme$subsystem", 01:19:42.949 "trtype": "$TEST_TRANSPORT", 01:19:42.949 "traddr": "$NVMF_FIRST_TARGET_IP", 01:19:42.949 "adrfam": "ipv4", 01:19:42.949 "trsvcid": "$NVMF_PORT", 01:19:42.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:19:42.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:19:42.949 "hdgst": ${hdgst:-false}, 01:19:42.949 "ddgst": ${ddgst:-false} 01:19:42.949 }, 01:19:42.949 "method": "bdev_nvme_attach_controller" 01:19:42.949 } 01:19:42.949 EOF 01:19:42.949 )") 01:19:42.949 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 01:19:42.949 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
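The target-side provisioning above is issued through the rpc_cmd wrapper against the application's /var/tmp/spdk.sock. The same sequence as standalone RPC calls would look roughly like this; the verbs and arguments are copied from the trace, while the rpc.py helper path is an assumption:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed location of the RPC helper
    $rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8192-byte in-capsule data size
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB ramdisk, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The generated JSON printed next is what bdevio receives on --json /dev/fd/62: a single bdev_nvme_attach_controller entry pointing at that 10.0.0.3:4420 listener.
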
01:19:43.207 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 01:19:43.207 05:14:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:19:43.207 "params": { 01:19:43.207 "name": "Nvme1", 01:19:43.207 "trtype": "tcp", 01:19:43.208 "traddr": "10.0.0.3", 01:19:43.208 "adrfam": "ipv4", 01:19:43.208 "trsvcid": "4420", 01:19:43.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:19:43.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:19:43.208 "hdgst": false, 01:19:43.208 "ddgst": false 01:19:43.208 }, 01:19:43.208 "method": "bdev_nvme_attach_controller" 01:19:43.208 }' 01:19:43.208 [2024-12-09 05:14:25.440530] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:19:43.208 [2024-12-09 05:14:25.440582] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67057 ] 01:19:43.208 [2024-12-09 05:14:25.590954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:19:43.208 [2024-12-09 05:14:25.647849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:19:43.208 [2024-12-09 05:14:25.648046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:19:43.208 [2024-12-09 05:14:25.648048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:19:43.467 [2024-12-09 05:14:25.703123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:19:43.467 I/O targets: 01:19:43.467 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 01:19:43.467 01:19:43.467 01:19:43.467 CUnit - A unit testing framework for C - Version 2.1-3 01:19:43.467 http://cunit.sourceforge.net/ 01:19:43.467 01:19:43.467 01:19:43.467 Suite: bdevio tests on: Nvme1n1 01:19:43.467 Test: blockdev write read block ...passed 01:19:43.467 Test: blockdev write zeroes read block ...passed 01:19:43.467 Test: blockdev write zeroes read no split ...passed 01:19:43.467 Test: blockdev write zeroes read split ...passed 01:19:43.467 Test: blockdev write zeroes read split partial ...passed 01:19:43.467 Test: blockdev reset ...[2024-12-09 05:14:25.850314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:19:43.467 [2024-12-09 05:14:25.850473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa00190 (9): Bad file descriptor 01:19:43.467 [2024-12-09 05:14:25.870955] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
01:19:43.467 passed 01:19:43.467 Test: blockdev write read 8 blocks ...passed 01:19:43.467 Test: blockdev write read size > 128k ...passed 01:19:43.467 Test: blockdev write read invalid size ...passed 01:19:43.467 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:19:43.467 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:19:43.467 Test: blockdev write read max offset ...passed 01:19:43.467 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:19:43.467 Test: blockdev writev readv 8 blocks ...passed 01:19:43.467 Test: blockdev writev readv 30 x 1block ...passed 01:19:43.467 Test: blockdev writev readv block ...passed 01:19:43.467 Test: blockdev writev readv size > 128k ...passed 01:19:43.467 Test: blockdev writev readv size > 128k in two iovs ...passed 01:19:43.467 Test: blockdev comparev and writev ...[2024-12-09 05:14:25.878496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:19:43.467 [2024-12-09 05:14:25.878586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:19:43.467 [2024-12-09 05:14:25.878605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:19:43.467 [2024-12-09 05:14:25.878614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:19:43.467 [2024-12-09 05:14:25.878984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:19:43.467 [2024-12-09 05:14:25.879008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:19:43.467 [2024-12-09 05:14:25.879021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:19:43.467 [2024-12-09 05:14:25.879029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:19:43.467 [2024-12-09 05:14:25.879359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:19:43.467 [2024-12-09 05:14:25.879374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:19:43.467 [2024-12-09 05:14:25.879389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:19:43.467 [2024-12-09 05:14:25.879397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:19:43.467 [2024-12-09 05:14:25.879708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:19:43.467 [2024-12-09 05:14:25.879727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:19:43.467 [2024-12-09 05:14:25.879739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:19:43.467 [2024-12-09 05:14:25.879747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:19:43.467 passed 01:19:43.467 Test: blockdev nvme passthru rw ...passed 01:19:43.467 Test: blockdev nvme passthru vendor specific ...passed 01:19:43.467 Test: blockdev nvme admin passthru ...[2024-12-09 05:14:25.880566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:19:43.467 [2024-12-09 05:14:25.880592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:19:43.467 [2024-12-09 05:14:25.880673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:19:43.467 [2024-12-09 05:14:25.880687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:19:43.467 [2024-12-09 05:14:25.880772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:19:43.467 [2024-12-09 05:14:25.880786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:19:43.467 [2024-12-09 05:14:25.880879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:19:43.467 [2024-12-09 05:14:25.880893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:19:43.467 passed 01:19:43.467 Test: blockdev copy ...passed 01:19:43.467 01:19:43.467 Run Summary: Type Total Ran Passed Failed Inactive 01:19:43.467 suites 1 1 n/a 0 0 01:19:43.467 tests 23 23 23 0 0 01:19:43.467 asserts 152 152 152 0 n/a 01:19:43.467 01:19:43.467 Elapsed time = 0.162 seconds 01:19:43.727 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:19:43.727 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:43.727 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:19:43.727 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:43.727 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 01:19:43.727 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 01:19:43.727 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 01:19:43.727 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 01:19:43.727 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:19:43.727 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 01:19:43.727 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 01:19:43.727 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:19:43.727 rmmod nvme_tcp 01:19:43.986 rmmod nvme_fabrics 01:19:43.986 rmmod nvme_keyring 01:19:43.986 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:19:43.986 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 01:19:43.986 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
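With the suite finished, nvmftestfini unloads the kernel initiator modules; the trace shows the removal attempted inside a "for i in {1..20}" loop with set +e in effect, since modprobe -r can fail while references are still being released. A hedged sketch of that retry pattern (the break condition and sleep are assumptions; the module names come from the rmmod output above):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1    # assumed back-off between attempts
    done
    set -e

The SPDK_NVMF-tagged iptables rules added during setup are then removed in one pass (iptables-save | grep -v SPDK_NVMF | iptables-restore) before the veth pairs, bridge and namespace are deleted, as the trace below shows.
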
01:19:43.986 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 67021 ']' 01:19:43.986 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 67021 01:19:43.986 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 67021 ']' 01:19:43.986 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 67021 01:19:43.986 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 01:19:43.986 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:19:43.986 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67021 01:19:43.986 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 01:19:43.986 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 01:19:43.986 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67021' 01:19:43.986 killing process with pid 67021 01:19:43.986 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 67021 01:19:43.986 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 67021 01:19:44.245 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:19:44.245 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:19:44.245 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:19:44.245 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 01:19:44.245 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 01:19:44.245 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:19:44.245 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 01:19:44.245 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:19:44.245 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:19:44.245 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:19:44.245 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 01:19:44.505 01:19:44.505 real 0m3.382s 01:19:44.505 user 0m9.515s 01:19:44.505 sys 0m0.974s 01:19:44.505 ************************************ 01:19:44.505 END TEST nvmf_bdevio 01:19:44.505 ************************************ 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:44.505 05:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:19:44.763 05:14:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:19:44.763 ************************************ 01:19:44.763 END TEST nvmf_target_core 01:19:44.763 ************************************ 01:19:44.763 01:19:44.763 real 2m35.757s 01:19:44.763 user 6m49.807s 01:19:44.763 sys 0m50.128s 01:19:44.763 05:14:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 01:19:44.763 05:14:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 01:19:44.763 05:14:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 01:19:44.763 05:14:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:19:44.763 05:14:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:44.763 05:14:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:19:44.763 ************************************ 01:19:44.763 START TEST nvmf_target_extra 01:19:44.763 ************************************ 01:19:44.763 05:14:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 01:19:44.763 * Looking for test storage... 
01:19:44.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:19:44.763 05:14:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:19:44.763 05:14:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 01:19:44.763 05:14:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:19:45.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:45.023 --rc genhtml_branch_coverage=1 01:19:45.023 --rc genhtml_function_coverage=1 01:19:45.023 --rc genhtml_legend=1 01:19:45.023 --rc geninfo_all_blocks=1 01:19:45.023 --rc geninfo_unexecuted_blocks=1 01:19:45.023 01:19:45.023 ' 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:19:45.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:45.023 --rc genhtml_branch_coverage=1 01:19:45.023 --rc genhtml_function_coverage=1 01:19:45.023 --rc genhtml_legend=1 01:19:45.023 --rc geninfo_all_blocks=1 01:19:45.023 --rc geninfo_unexecuted_blocks=1 01:19:45.023 01:19:45.023 ' 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:19:45.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:45.023 --rc genhtml_branch_coverage=1 01:19:45.023 --rc genhtml_function_coverage=1 01:19:45.023 --rc genhtml_legend=1 01:19:45.023 --rc geninfo_all_blocks=1 01:19:45.023 --rc geninfo_unexecuted_blocks=1 01:19:45.023 01:19:45.023 ' 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:19:45.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:45.023 --rc genhtml_branch_coverage=1 01:19:45.023 --rc genhtml_function_coverage=1 01:19:45.023 --rc genhtml_legend=1 01:19:45.023 --rc geninfo_all_blocks=1 01:19:45.023 --rc geninfo_unexecuted_blocks=1 01:19:45.023 01:19:45.023 ' 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:19:45.023 05:14:27 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:19:45.023 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:19:45.023 ************************************ 01:19:45.023 START TEST nvmf_auth_target 01:19:45.023 ************************************ 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 01:19:45.023 * Looking for test storage... 
01:19:45.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:19:45.023 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:19:45.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:45.284 --rc genhtml_branch_coverage=1 01:19:45.284 --rc genhtml_function_coverage=1 01:19:45.284 --rc genhtml_legend=1 01:19:45.284 --rc geninfo_all_blocks=1 01:19:45.284 --rc geninfo_unexecuted_blocks=1 01:19:45.284 01:19:45.284 ' 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:19:45.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:45.284 --rc genhtml_branch_coverage=1 01:19:45.284 --rc genhtml_function_coverage=1 01:19:45.284 --rc genhtml_legend=1 01:19:45.284 --rc geninfo_all_blocks=1 01:19:45.284 --rc geninfo_unexecuted_blocks=1 01:19:45.284 01:19:45.284 ' 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:19:45.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:45.284 --rc genhtml_branch_coverage=1 01:19:45.284 --rc genhtml_function_coverage=1 01:19:45.284 --rc genhtml_legend=1 01:19:45.284 --rc geninfo_all_blocks=1 01:19:45.284 --rc geninfo_unexecuted_blocks=1 01:19:45.284 01:19:45.284 ' 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:19:45.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:19:45.284 --rc genhtml_branch_coverage=1 01:19:45.284 --rc genhtml_function_coverage=1 01:19:45.284 --rc genhtml_legend=1 01:19:45.284 --rc geninfo_all_blocks=1 01:19:45.284 --rc geninfo_unexecuted_blocks=1 01:19:45.284 01:19:45.284 ' 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:19:45.284 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:19:45.285 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:19:45.285 
05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:19:45.285 Cannot find device "nvmf_init_br" 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:19:45.285 Cannot find device "nvmf_init_br2" 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:19:45.285 Cannot find device "nvmf_tgt_br" 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:19:45.285 Cannot find device "nvmf_tgt_br2" 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:19:45.285 Cannot find device "nvmf_init_br" 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 01:19:45.285 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:19:45.545 Cannot find device "nvmf_init_br2" 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:19:45.545 Cannot find device "nvmf_tgt_br" 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:19:45.545 Cannot find device "nvmf_tgt_br2" 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:19:45.545 Cannot find device "nvmf_br" 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:19:45.545 Cannot find device "nvmf_init_if" 01:19:45.545 05:14:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:19:45.545 Cannot find device "nvmf_init_if2" 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:19:45.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:19:45.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:19:45.545 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:19:45.546 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:19:45.546 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:19:45.805 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:19:45.805 05:14:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:19:45.805 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:19:45.805 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.123 ms 01:19:45.805 01:19:45.805 --- 10.0.0.3 ping statistics --- 01:19:45.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:45.805 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:19:45.805 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:19:45.805 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms 01:19:45.805 01:19:45.805 --- 10.0.0.4 ping statistics --- 01:19:45.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:45.805 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 01:19:45.805 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:19:45.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:19:45.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 01:19:45.806 01:19:45.806 --- 10.0.0.1 ping statistics --- 01:19:45.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:45.806 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:19:45.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:19:45.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 01:19:45.806 01:19:45.806 --- 10.0.0.2 ping statistics --- 01:19:45.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:19:45.806 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67351 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67351 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67351 ']' 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
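The nvmf_veth_init sequence traced above builds the whole test network out of veth pairs: two initiator-side interfaces (10.0.0.1/24 and 10.0.0.2/24) stay in the root namespace, two target-side interfaces (10.0.0.3/24 and 10.0.0.4/24) are moved into nvmf_tgt_ns_spdk, the peer ends are all enslaved to the nvmf_br bridge, TCP/4420 is opened in iptables, and reachability is verified with ping before the target is started. The "Cannot find device" messages at the start are only the cleanup of a previous run; each cleanup command is followed by true so a missing device never fails the script. A minimal single-path sketch of the same topology (hypothetical demo_* names, needs root; not the exact nvmf/common.sh helper):

#!/usr/bin/env bash
# One initiator veth pair + one target veth pair in a namespace, joined by a bridge.
set -e
NS=demo_tgt_ns                                   # hypothetical namespace name
ip netns add "$NS"
ip link add demo_init_if type veth peer name demo_init_br
ip link add demo_tgt_if  type veth peer name demo_tgt_br
ip link set demo_tgt_if netns "$NS"              # target end lives inside the namespace
ip addr add 10.0.0.1/24 dev demo_init_if         # initiator address
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev demo_tgt_if   # target address
ip link set demo_init_if up
ip link set demo_init_br up
ip netns exec "$NS" ip link set demo_tgt_if up
ip netns exec "$NS" ip link set lo up
ip link add demo_br type bridge
ip link set demo_br up
ip link set demo_init_br master demo_br          # bridge joins the two halves at L2
ip link set demo_tgt_br master demo_br
ip link set demo_tgt_br up
iptables -I INPUT 1 -i demo_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
iptables -A FORWARD -i demo_br -o demo_br -j ACCEPT                 # in case br_netfilter filters bridged frames
ping -c 1 10.0.0.3                               # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator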
01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:45.806 05:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:46.742 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:46.742 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:19:46.742 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:19:46.742 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 01:19:46.742 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67382 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=78f7a6aaf77ed6cb87842f58cb763693b31b38c142fccdf1 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.GB5 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 78f7a6aaf77ed6cb87842f58cb763693b31b38c142fccdf1 0 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 78f7a6aaf77ed6cb87842f58cb763693b31b38c142fccdf1 0 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=78f7a6aaf77ed6cb87842f58cb763693b31b38c142fccdf1 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:19:47.001 05:14:29 
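Two SPDK applications take part in this test: the NVMe-oF target, started inside the namespace with authentication debug logging (-L nvmf_auth) and listening on the default RPC socket, and a second spdk_tgt that plays the host role on its own RPC socket (/var/tmp/host.sock, -L nvme_auth). Reduced to their command lines as captured above (the script then simply waits for each UNIX socket to appear):

# Target application, pinned to the test namespace; RPC on /var/tmp/spdk.sock.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &

# Host-side application used to drive bdev_nvme_attach_controller; RPC on /var/tmp/host.sock.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &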
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.GB5 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.GB5 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.GB5 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b5e3c7fe224e3d7df1d9c5d6c1e8b9c8542c9096d8a422eaf4b10e6d285f9849 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.TA8 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b5e3c7fe224e3d7df1d9c5d6c1e8b9c8542c9096d8a422eaf4b10e6d285f9849 3 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b5e3c7fe224e3d7df1d9c5d6c1e8b9c8542c9096d8a422eaf4b10e6d285f9849 3 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b5e3c7fe224e3d7df1d9c5d6c1e8b9c8542c9096d8a422eaf4b10e6d285f9849 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.TA8 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.TA8 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.TA8 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 01:19:47.001 05:14:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=62b4c952c1831aeeb19a0fa41b6f63aa 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.305 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 62b4c952c1831aeeb19a0fa41b6f63aa 1 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 62b4c952c1831aeeb19a0fa41b6f63aa 1 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=62b4c952c1831aeeb19a0fa41b6f63aa 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 01:19:47.001 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.305 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.305 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.305 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=32a228f946e4e7bd981d3b7ed86dcfd5108fa38a932f5ebb 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Yq0 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 32a228f946e4e7bd981d3b7ed86dcfd5108fa38a932f5ebb 2 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 32a228f946e4e7bd981d3b7ed86dcfd5108fa38a932f5ebb 2 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=32a228f946e4e7bd981d3b7ed86dcfd5108fa38a932f5ebb 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Yq0 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Yq0 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Yq0 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=37c4c1006b816903a000891513f88d94e82a4dcab3ab7bc9 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.BTy 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 37c4c1006b816903a000891513f88d94e82a4dcab3ab7bc9 2 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 37c4c1006b816903a000891513f88d94e82a4dcab3ab7bc9 2 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=37c4c1006b816903a000891513f88d94e82a4dcab3ab7bc9 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.BTy 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.BTy 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.BTy 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:19:47.261 05:14:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=26a0187a6cc5fb021493c4920ff9251a 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.eYa 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 26a0187a6cc5fb021493c4920ff9251a 1 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 26a0187a6cc5fb021493c4920ff9251a 1 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=26a0187a6cc5fb021493c4920ff9251a 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.eYa 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.eYa 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.eYa 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 01:19:47.261 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=79f1efd269cd1a0bed6958e5026050836bdc9abdfa52a2fb645b37ab8c1cfa1e 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dQ3 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
79f1efd269cd1a0bed6958e5026050836bdc9abdfa52a2fb645b37ab8c1cfa1e 3 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 79f1efd269cd1a0bed6958e5026050836bdc9abdfa52a2fb645b37ab8c1cfa1e 3 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=79f1efd269cd1a0bed6958e5026050836bdc9abdfa52a2fb645b37ab8c1cfa1e 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dQ3 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dQ3 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.dQ3 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 01:19:47.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67351 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67351 ']' 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:47.521 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:47.780 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:47.780 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:19:47.780 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67382 /var/tmp/host.sock 01:19:47.780 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67382 ']' 01:19:47.780 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 01:19:47.780 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:19:47.780 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 01:19:47.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
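gen_dhchap_key, as traced above, draws len/2 random bytes with xxd, keeps them as a hex string, and wraps that string in the DHHC-1 secret representation: base64 of the secret followed by its CRC-32 (little-endian), prefixed with the hash identifier used in the digests map (00 null, 01 sha256, 02 sha384, 03 sha512), and written 0600 to a mktemp file. A self-contained re-creation of that helper (a sketch, not the exact nvmf/common.sh code; the inline python3 stands in for the trace's python step):

#!/usr/bin/env bash
# Generate a DHHC-1 secret file the way the trace does: hex secret + base64(secret || crc32).
gen_dhchap_key() {
    local digest=$1 len=$2                 # e.g. "sha256" and 32 (hex characters)
    declare -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # len hex characters of randomness
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # DHHC-1:<hash id>:<base64(secret || CRC-32(secret), CRC little-endian)>:
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()), end="")' "$key" "${ids[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

keyfile=$(gen_dhchap_key sha256 32)        # e.g. /tmp/spdk.key-sha256.XXX
cat "$keyfile"; echo                       # prints something like DHHC-1:01:<base64>:

The base64 strings that show up later on the nvme connect lines are exactly these secrets; decoding one yields the original hex string plus the four CRC bytes.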
01:19:47.780 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:19:47.780 05:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:47.780 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:19:47.780 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:19:47.780 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 01:19:47.780 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:47.780 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:47.780 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:47.780 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 01:19:47.780 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GB5 01:19:47.780 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:47.780 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:47.780 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:47.780 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.GB5 01:19:47.780 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.GB5 01:19:48.039 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.TA8 ]] 01:19:48.039 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TA8 01:19:48.039 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:48.039 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:48.039 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:48.039 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TA8 01:19:48.039 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TA8 01:19:48.298 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 01:19:48.298 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.305 01:19:48.298 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:48.298 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:48.298 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:48.298 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.305 01:19:48.299 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.305 01:19:48.558 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Yq0 ]] 01:19:48.558 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yq0 01:19:48.558 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:48.558 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:48.558 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:48.558 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yq0 01:19:48.558 05:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yq0 01:19:48.817 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 01:19:48.817 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.BTy 01:19:48.817 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:48.817 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:48.817 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:48.817 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.BTy 01:19:48.817 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.BTy 01:19:49.076 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.eYa ]] 01:19:49.076 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eYa 01:19:49.076 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:49.076 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:49.076 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:49.076 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eYa 01:19:49.076 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eYa 01:19:49.335 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 01:19:49.335 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dQ3 01:19:49.335 05:14:31 
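Each generated keyfile is loaded twice through keyring_file_add_key: once into the target (rpc_cmd on the default /var/tmp/spdk.sock) and once into the host application (-s /var/tmp/host.sock); the corresponding controller key ckeyN is registered the same way whenever one was generated. Condensed to plain rpc.py calls for the first pair (paths exactly as in the trace, both applications assumed to be running):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC keyring_file_add_key key0  /tmp/spdk.key-null.GB5                        # target keyring
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TA8                      # controller (bidirectional) key
$RPC -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.GB5  # host keyring
$RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TA8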
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:49.335 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:49.335 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:49.335 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.dQ3 01:19:49.335 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.dQ3 01:19:49.335 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 01:19:49.335 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 01:19:49.335 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:19:49.335 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:19:49.335 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:19:49.335 05:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:19:49.594 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 01:19:49.594 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:19:49.594 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:19:49.594 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:19:49.594 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:19:49.594 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:19:49.594 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:19:49.594 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:49.594 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:49.594 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:49.594 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:19:49.853 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:19:49.853 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:19:50.113 01:19:50.113 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:19:50.113 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:19:50.113 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:19:50.113 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:19:50.113 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:19:50.113 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:50.113 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:50.113 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:50.113 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:19:50.113 { 01:19:50.113 "cntlid": 1, 01:19:50.113 "qid": 0, 01:19:50.113 "state": "enabled", 01:19:50.113 "thread": "nvmf_tgt_poll_group_000", 01:19:50.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:19:50.114 "listen_address": { 01:19:50.114 "trtype": "TCP", 01:19:50.114 "adrfam": "IPv4", 01:19:50.114 "traddr": "10.0.0.3", 01:19:50.114 "trsvcid": "4420" 01:19:50.114 }, 01:19:50.114 "peer_address": { 01:19:50.114 "trtype": "TCP", 01:19:50.114 "adrfam": "IPv4", 01:19:50.114 "traddr": "10.0.0.1", 01:19:50.114 "trsvcid": "59006" 01:19:50.114 }, 01:19:50.114 "auth": { 01:19:50.114 "state": "completed", 01:19:50.114 "digest": "sha256", 01:19:50.114 "dhgroup": "null" 01:19:50.114 } 01:19:50.114 } 01:19:50.114 ]' 01:19:50.373 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:19:50.373 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:19:50.373 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:19:50.373 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:19:50.373 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:19:50.373 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:19:50.373 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:19:50.373 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:19:50.633 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:19:50.633 05:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:19:54.830 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:19:54.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:19:54.830 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:19:54.830 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:54.830 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:54.830 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:54.830 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:19:54.830 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:19:54.830 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:19:54.830 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 01:19:54.830 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:19:54.830 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:19:54.830 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:19:54.830 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:19:54.830 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:19:54.830 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:19:54.831 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:54.831 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:54.831 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:54.831 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:19:54.831 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:19:54.831 05:14:36 
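From here on the test iterates digests x dhgroups x keys, and every iteration repeats the pattern just traced: constrain the host's DH-HMAC-CHAP parameters, allow the host NQN on the subsystem with the matching key pair, attach a controller from the SPDK host and check the qpair's negotiated auth parameters, detach, redo the handshake from the kernel initiator with the DHHC-1 strings, and finally remove the host again. One iteration, compressed into the underlying commands (the RPC/HOSTRPC shorthands and the jq filter are illustrative; addresses, NQNs and flags are the ones from the trace, and the $(cat ...) substitutions assume each keyfile holds exactly its DHHC-1 string, which is what the generation step wrote):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTRPC="$RPC -s /var/tmp/host.sock"
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b

$HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null   # what the host may negotiate
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
$HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'   # expect state "completed", digest "sha256", dhgroup "null"
$HOSTRPC bdev_nvme_detach_controller nvme0

# Same handshake from the kernel initiator, passing the DHHC-1 secrets directly.
nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 \
    --dhchap-secret "$(cat /tmp/spdk.key-null.GB5)" --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.TA8)"
nvme disconnect -n "$SUBNQN"
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"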
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:19:54.831 01:19:54.831 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:19:54.831 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:19:54.831 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:19:55.091 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:19:55.091 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:19:55.091 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:55.091 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:55.091 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:55.091 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:19:55.091 { 01:19:55.091 "cntlid": 3, 01:19:55.091 "qid": 0, 01:19:55.091 "state": "enabled", 01:19:55.091 "thread": "nvmf_tgt_poll_group_000", 01:19:55.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:19:55.091 "listen_address": { 01:19:55.091 "trtype": "TCP", 01:19:55.091 "adrfam": "IPv4", 01:19:55.091 "traddr": "10.0.0.3", 01:19:55.091 "trsvcid": "4420" 01:19:55.091 }, 01:19:55.091 "peer_address": { 01:19:55.091 "trtype": "TCP", 01:19:55.091 "adrfam": "IPv4", 01:19:55.091 "traddr": "10.0.0.1", 01:19:55.091 "trsvcid": "59036" 01:19:55.091 }, 01:19:55.091 "auth": { 01:19:55.091 "state": "completed", 01:19:55.091 "digest": "sha256", 01:19:55.091 "dhgroup": "null" 01:19:55.091 } 01:19:55.091 } 01:19:55.091 ]' 01:19:55.091 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:19:55.091 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:19:55.091 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:19:55.091 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:19:55.091 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:19:55.091 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:19:55.091 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:19:55.091 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:19:55.351 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret 
DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:19:55.351 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:19:55.919 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:19:55.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:19:55.919 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:19:55.919 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:55.919 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:56.178 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:56.178 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:19:56.178 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:19:56.178 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:19:56.178 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 01:19:56.178 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:19:56.178 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:19:56.179 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:19:56.179 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:19:56.179 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:19:56.179 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:19:56.179 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:56.179 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:56.179 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:56.179 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:19:56.179 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:19:56.179 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:19:56.747 01:19:56.747 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:19:56.747 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:19:56.747 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:19:56.747 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:19:56.747 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:19:56.747 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:56.747 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:56.747 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:56.747 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:19:56.747 { 01:19:56.747 "cntlid": 5, 01:19:56.747 "qid": 0, 01:19:56.747 "state": "enabled", 01:19:56.747 "thread": "nvmf_tgt_poll_group_000", 01:19:56.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:19:56.747 "listen_address": { 01:19:56.747 "trtype": "TCP", 01:19:56.747 "adrfam": "IPv4", 01:19:56.747 "traddr": "10.0.0.3", 01:19:56.747 "trsvcid": "4420" 01:19:56.747 }, 01:19:56.747 "peer_address": { 01:19:56.747 "trtype": "TCP", 01:19:56.747 "adrfam": "IPv4", 01:19:56.747 "traddr": "10.0.0.1", 01:19:56.747 "trsvcid": "59048" 01:19:56.747 }, 01:19:56.747 "auth": { 01:19:56.747 "state": "completed", 01:19:56.747 "digest": "sha256", 01:19:56.747 "dhgroup": "null" 01:19:56.747 } 01:19:56.747 } 01:19:56.747 ]' 01:19:56.747 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:19:57.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:19:57.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:19:57.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:19:57.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:19:57.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:19:57.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:19:57.007 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:19:57.267 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:19:57.267 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:19:57.837 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:19:57.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:19:57.837 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:19:57.837 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:57.837 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:57.837 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:57.837 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:19:57.837 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:19:57.837 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 01:19:58.096 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 01:19:58.096 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:19:58.096 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:19:58.096 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:19:58.096 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:19:58.096 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:19:58.096 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:19:58.096 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:58.096 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:58.096 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:58.096 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:19:58.096 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:19:58.096 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:19:58.355 01:19:58.355 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:19:58.355 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:19:58.355 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:19:58.613 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:19:58.613 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:19:58.613 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:58.613 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:58.613 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:58.613 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:19:58.613 { 01:19:58.613 "cntlid": 7, 01:19:58.613 "qid": 0, 01:19:58.613 "state": "enabled", 01:19:58.613 "thread": "nvmf_tgt_poll_group_000", 01:19:58.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:19:58.613 "listen_address": { 01:19:58.613 "trtype": "TCP", 01:19:58.613 "adrfam": "IPv4", 01:19:58.613 "traddr": "10.0.0.3", 01:19:58.613 "trsvcid": "4420" 01:19:58.613 }, 01:19:58.613 "peer_address": { 01:19:58.613 "trtype": "TCP", 01:19:58.613 "adrfam": "IPv4", 01:19:58.613 "traddr": "10.0.0.1", 01:19:58.613 "trsvcid": "59074" 01:19:58.613 }, 01:19:58.613 "auth": { 01:19:58.613 "state": "completed", 01:19:58.613 "digest": "sha256", 01:19:58.613 "dhgroup": "null" 01:19:58.613 } 01:19:58.613 } 01:19:58.613 ]' 01:19:58.613 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:19:58.613 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:19:58.613 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:19:58.613 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:19:58.613 05:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:19:58.613 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:19:58.613 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:19:58.613 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:19:58.871 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:19:58.872 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:19:59.440 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:19:59.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:19:59.440 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:19:59.440 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:59.440 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:59.440 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:59.440 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:19:59.440 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:19:59.440 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:19:59.440 05:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:19:59.699 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 01:19:59.699 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:19:59.699 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:19:59.699 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:19:59.699 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:19:59.699 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:19:59.699 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:19:59.699 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:19:59.699 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:19:59.699 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:19:59.699 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:19:59.699 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:19:59.699 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:19:59.957 01:19:59.957 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:19:59.957 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:19:59.957 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:00.216 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:00.216 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:00.216 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:00.216 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:00.216 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:00.216 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:00.216 { 01:20:00.216 "cntlid": 9, 01:20:00.216 "qid": 0, 01:20:00.216 "state": "enabled", 01:20:00.216 "thread": "nvmf_tgt_poll_group_000", 01:20:00.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:00.216 "listen_address": { 01:20:00.216 "trtype": "TCP", 01:20:00.216 "adrfam": "IPv4", 01:20:00.216 "traddr": "10.0.0.3", 01:20:00.216 "trsvcid": "4420" 01:20:00.216 }, 01:20:00.216 "peer_address": { 01:20:00.216 "trtype": "TCP", 01:20:00.216 "adrfam": "IPv4", 01:20:00.216 "traddr": "10.0.0.1", 01:20:00.216 "trsvcid": "41174" 01:20:00.216 }, 01:20:00.216 "auth": { 01:20:00.216 "state": "completed", 01:20:00.216 "digest": "sha256", 01:20:00.216 "dhgroup": "ffdhe2048" 01:20:00.216 } 01:20:00.216 } 01:20:00.216 ]' 01:20:00.216 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:00.476 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:00.476 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:00.476 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:20:00.476 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:00.476 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:00.476 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:00.476 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:00.796 
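(For readers following the trace: each pass of the key/DH-group loop above reduces to the host-side sequence sketched below. Paths, addresses, NQNs and key names are copied from the recorded sha256/ffdhe2048/key0 pass; the jq step is condensed from the separate digest/dhgroup/state checks, and rpc_cmd here is the autotest framework's RPC helper aimed at the nvmf target, while the explicit rpc.py -s /var/tmp/host.sock calls drive the second, host-side SPDK instance. This is an illustrative summary, not additional recorded output.)

# limit the host to the digest/DH-group combination under test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# allow the host NQN on the subsystem with the key pair under test
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# attach a controller through the host RPC socket, authenticating with the same keys
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# verify the negotiated auth parameters on the target side, then tear down
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0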
05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:00.796 05:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:01.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:01.366 05:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:01.935 01:20:01.935 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:01.935 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:01.935 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:01.936 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:01.936 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:01.936 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:01.936 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:01.936 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:01.936 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:01.936 { 01:20:01.936 "cntlid": 11, 01:20:01.936 "qid": 0, 01:20:01.936 "state": "enabled", 01:20:01.936 "thread": "nvmf_tgt_poll_group_000", 01:20:01.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:01.936 "listen_address": { 01:20:01.936 "trtype": "TCP", 01:20:01.936 "adrfam": "IPv4", 01:20:01.936 "traddr": "10.0.0.3", 01:20:01.936 "trsvcid": "4420" 01:20:01.936 }, 01:20:01.936 "peer_address": { 01:20:01.936 "trtype": "TCP", 01:20:01.936 "adrfam": "IPv4", 01:20:01.936 "traddr": "10.0.0.1", 01:20:01.936 "trsvcid": "41200" 01:20:01.936 }, 01:20:01.936 "auth": { 01:20:01.936 "state": "completed", 01:20:01.936 "digest": "sha256", 01:20:01.936 "dhgroup": "ffdhe2048" 01:20:01.936 } 01:20:01.936 } 01:20:01.936 ]' 01:20:01.936 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:01.936 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:01.936 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:02.195 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:20:02.195 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:02.196 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:02.196 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:02.196 
05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:02.455 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:02.455 05:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:03.024 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:03.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:03.024 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:03.024 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:03.024 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:03.024 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:03.024 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:03.024 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:20:03.024 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:20:03.284 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 01:20:03.284 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:03.284 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:03.284 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:20:03.284 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:20:03.284 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:03.284 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:03.284 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:03.284 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:03.284 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 01:20:03.284 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:03.284 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:03.284 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:03.544 01:20:03.544 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:03.544 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:03.544 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:03.804 05:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:03.804 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:03.804 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:03.804 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:03.804 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:03.804 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:03.804 { 01:20:03.804 "cntlid": 13, 01:20:03.804 "qid": 0, 01:20:03.804 "state": "enabled", 01:20:03.804 "thread": "nvmf_tgt_poll_group_000", 01:20:03.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:03.804 "listen_address": { 01:20:03.804 "trtype": "TCP", 01:20:03.804 "adrfam": "IPv4", 01:20:03.804 "traddr": "10.0.0.3", 01:20:03.804 "trsvcid": "4420" 01:20:03.804 }, 01:20:03.804 "peer_address": { 01:20:03.804 "trtype": "TCP", 01:20:03.804 "adrfam": "IPv4", 01:20:03.804 "traddr": "10.0.0.1", 01:20:03.804 "trsvcid": "41220" 01:20:03.804 }, 01:20:03.804 "auth": { 01:20:03.804 "state": "completed", 01:20:03.804 "digest": "sha256", 01:20:03.804 "dhgroup": "ffdhe2048" 01:20:03.804 } 01:20:03.804 } 01:20:03.804 ]' 01:20:03.804 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:03.804 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:03.804 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:03.804 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:20:03.804 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:03.804 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:03.804 05:14:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:03.804 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:04.063 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:20:04.063 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:20:04.630 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:04.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:04.630 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:04.630 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:04.630 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:04.630 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:04.630 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:04.630 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:20:04.630 05:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:20:04.889 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 01:20:04.889 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:04.889 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:04.889 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:20:04.889 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:20:04.889 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:04.889 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:20:04.889 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:04.889 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
01:20:04.889 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:04.889 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:20:04.889 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:20:04.889 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:20:05.148 01:20:05.148 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:05.148 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:05.148 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:05.408 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:05.408 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:05.408 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:05.408 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:05.408 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:05.408 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:05.408 { 01:20:05.408 "cntlid": 15, 01:20:05.408 "qid": 0, 01:20:05.408 "state": "enabled", 01:20:05.408 "thread": "nvmf_tgt_poll_group_000", 01:20:05.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:05.408 "listen_address": { 01:20:05.408 "trtype": "TCP", 01:20:05.408 "adrfam": "IPv4", 01:20:05.408 "traddr": "10.0.0.3", 01:20:05.408 "trsvcid": "4420" 01:20:05.408 }, 01:20:05.408 "peer_address": { 01:20:05.408 "trtype": "TCP", 01:20:05.408 "adrfam": "IPv4", 01:20:05.408 "traddr": "10.0.0.1", 01:20:05.408 "trsvcid": "41250" 01:20:05.408 }, 01:20:05.408 "auth": { 01:20:05.408 "state": "completed", 01:20:05.408 "digest": "sha256", 01:20:05.408 "dhgroup": "ffdhe2048" 01:20:05.408 } 01:20:05.408 } 01:20:05.408 ]' 01:20:05.408 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:05.408 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:05.408 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:05.408 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:20:05.408 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:05.408 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:05.668 
05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:05.668 05:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:05.668 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:20:05.668 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:06.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:06.607 05:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:06.866 01:20:06.866 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:06.866 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:06.866 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:07.125 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:07.125 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:07.125 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:07.125 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:07.125 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:07.125 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:07.125 { 01:20:07.125 "cntlid": 17, 01:20:07.125 "qid": 0, 01:20:07.125 "state": "enabled", 01:20:07.125 "thread": "nvmf_tgt_poll_group_000", 01:20:07.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:07.125 "listen_address": { 01:20:07.125 "trtype": "TCP", 01:20:07.125 "adrfam": "IPv4", 01:20:07.125 "traddr": "10.0.0.3", 01:20:07.125 "trsvcid": "4420" 01:20:07.125 }, 01:20:07.125 "peer_address": { 01:20:07.125 "trtype": "TCP", 01:20:07.125 "adrfam": "IPv4", 01:20:07.125 "traddr": "10.0.0.1", 01:20:07.125 "trsvcid": "41276" 01:20:07.125 }, 01:20:07.125 "auth": { 01:20:07.125 "state": "completed", 01:20:07.125 "digest": "sha256", 01:20:07.125 "dhgroup": "ffdhe3072" 01:20:07.125 } 01:20:07.125 } 01:20:07.125 ]' 01:20:07.125 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:07.125 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:07.125 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:07.384 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:20:07.384 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:07.384 05:14:49 
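(The same key material is also exercised through the kernel initiator, as in the nvme-cli commands recorded in the trace around this point. Condensed, and with the DHHC-1 secrets abbreviated to '...', one such iteration looks like the sketch below; the controller secret is only passed when the key under test has one configured.)

# connect with the per-host DH-CHAP secret (and controller secret, if configured)
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b \
    --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
# disconnect and drop the host entry before moving on to the next key
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b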
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:07.384 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:07.384 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:07.643 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:07.643 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:08.211 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:08.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:08.211 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:08.211 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:08.211 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:08.211 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:08.211 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:08.212 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:20:08.212 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:20:08.470 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 01:20:08.470 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:08.470 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:08.470 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:20:08.470 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:20:08.470 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:08.470 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 01:20:08.470 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:08.470 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:08.470 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:08.470 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:08.470 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:08.471 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:08.741 01:20:08.741 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:08.741 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:08.741 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:09.015 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:09.015 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:09.015 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:09.015 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:09.015 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:09.015 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:09.015 { 01:20:09.015 "cntlid": 19, 01:20:09.015 "qid": 0, 01:20:09.015 "state": "enabled", 01:20:09.015 "thread": "nvmf_tgt_poll_group_000", 01:20:09.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:09.015 "listen_address": { 01:20:09.015 "trtype": "TCP", 01:20:09.015 "adrfam": "IPv4", 01:20:09.015 "traddr": "10.0.0.3", 01:20:09.015 "trsvcid": "4420" 01:20:09.015 }, 01:20:09.015 "peer_address": { 01:20:09.015 "trtype": "TCP", 01:20:09.015 "adrfam": "IPv4", 01:20:09.015 "traddr": "10.0.0.1", 01:20:09.015 "trsvcid": "41314" 01:20:09.015 }, 01:20:09.015 "auth": { 01:20:09.015 "state": "completed", 01:20:09.015 "digest": "sha256", 01:20:09.015 "dhgroup": "ffdhe3072" 01:20:09.015 } 01:20:09.015 } 01:20:09.015 ]' 01:20:09.015 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:09.015 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:09.015 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:09.015 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:20:09.015 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:09.015 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:09.015 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:09.015 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:09.274 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:09.274 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:09.845 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:09.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:09.845 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:09.845 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:09.845 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:09.845 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:09.845 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:09.845 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:20:09.845 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:20:10.104 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 01:20:10.104 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:10.104 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:10.104 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:20:10.104 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:20:10.104 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:10.104 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:10.104 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:10.104 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:10.104 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:10.104 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:10.104 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:10.104 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:10.363 01:20:10.363 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:10.363 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:10.363 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:10.623 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:10.623 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:10.623 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:10.623 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:10.623 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:10.623 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:10.623 { 01:20:10.623 "cntlid": 21, 01:20:10.623 "qid": 0, 01:20:10.623 "state": "enabled", 01:20:10.623 "thread": "nvmf_tgt_poll_group_000", 01:20:10.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:10.623 "listen_address": { 01:20:10.623 "trtype": "TCP", 01:20:10.623 "adrfam": "IPv4", 01:20:10.623 "traddr": "10.0.0.3", 01:20:10.623 "trsvcid": "4420" 01:20:10.623 }, 01:20:10.623 "peer_address": { 01:20:10.623 "trtype": "TCP", 01:20:10.623 "adrfam": "IPv4", 01:20:10.623 "traddr": "10.0.0.1", 01:20:10.623 "trsvcid": "43658" 01:20:10.623 }, 01:20:10.623 "auth": { 01:20:10.623 "state": "completed", 01:20:10.623 "digest": "sha256", 01:20:10.623 "dhgroup": "ffdhe3072" 01:20:10.623 } 01:20:10.624 } 01:20:10.624 ]' 01:20:10.624 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:10.624 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:10.624 05:14:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:10.624 05:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:20:10.624 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:10.624 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:10.624 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:10.624 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:10.883 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:20:10.883 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:20:11.506 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:11.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:11.506 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:11.506 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:11.506 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:11.506 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:11.506 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:11.506 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:20:11.506 05:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:20:11.766 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 01:20:11.766 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:11.766 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:11.766 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:20:11.766 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:20:11.766 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:11.766 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:20:11.766 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:11.766 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:11.766 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:11.766 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:20:11.766 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:20:11.766 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:20:12.025 01:20:12.025 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:12.025 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:12.026 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:12.286 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:12.286 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:12.286 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:12.286 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:12.286 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:12.286 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:12.286 { 01:20:12.286 "cntlid": 23, 01:20:12.286 "qid": 0, 01:20:12.286 "state": "enabled", 01:20:12.286 "thread": "nvmf_tgt_poll_group_000", 01:20:12.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:12.286 "listen_address": { 01:20:12.286 "trtype": "TCP", 01:20:12.286 "adrfam": "IPv4", 01:20:12.286 "traddr": "10.0.0.3", 01:20:12.286 "trsvcid": "4420" 01:20:12.286 }, 01:20:12.286 "peer_address": { 01:20:12.286 "trtype": "TCP", 01:20:12.286 "adrfam": "IPv4", 01:20:12.286 "traddr": "10.0.0.1", 01:20:12.286 "trsvcid": "43682" 01:20:12.286 }, 01:20:12.286 "auth": { 01:20:12.286 "state": "completed", 01:20:12.286 "digest": "sha256", 01:20:12.286 "dhgroup": "ffdhe3072" 01:20:12.286 } 01:20:12.286 } 01:20:12.286 ]' 01:20:12.286 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:12.286 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 01:20:12.286 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:12.286 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:20:12.546 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:12.546 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:12.546 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:12.546 05:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:12.811 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:20:12.811 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:20:13.383 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:13.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:13.383 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:13.383 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:13.383 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:13.383 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:13.383 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:20:13.383 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:13.383 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:20:13.383 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:20:13.642 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 01:20:13.642 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:13.642 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:13.642 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:20:13.642 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:20:13.642 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:13.642 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:13.642 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:13.642 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:13.642 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:13.642 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:13.642 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:13.642 05:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:13.902 01:20:13.902 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:13.902 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:13.902 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:14.161 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:14.161 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:14.161 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:14.161 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:14.161 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:14.161 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:14.161 { 01:20:14.161 "cntlid": 25, 01:20:14.161 "qid": 0, 01:20:14.161 "state": "enabled", 01:20:14.161 "thread": "nvmf_tgt_poll_group_000", 01:20:14.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:14.161 "listen_address": { 01:20:14.161 "trtype": "TCP", 01:20:14.161 "adrfam": "IPv4", 01:20:14.161 "traddr": "10.0.0.3", 01:20:14.161 "trsvcid": "4420" 01:20:14.161 }, 01:20:14.161 "peer_address": { 01:20:14.161 "trtype": "TCP", 01:20:14.161 "adrfam": "IPv4", 01:20:14.161 "traddr": "10.0.0.1", 01:20:14.161 "trsvcid": "43710" 01:20:14.161 }, 01:20:14.161 "auth": { 01:20:14.161 "state": "completed", 01:20:14.161 "digest": "sha256", 01:20:14.161 "dhgroup": "ffdhe4096" 01:20:14.161 } 01:20:14.161 } 01:20:14.161 ]' 01:20:14.161 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 01:20:14.161 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:14.161 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:14.161 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:20:14.161 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:14.161 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:14.161 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:14.161 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:14.420 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:14.420 05:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:14.988 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:14.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:14.988 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:14.988 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:14.988 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:14.988 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:14.988 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:14.988 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:20:14.988 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:20:15.247 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 01:20:15.247 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:15.247 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:15.247 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 01:20:15.247 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:20:15.247 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:15.247 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:15.247 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:15.247 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:15.248 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:15.248 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:15.248 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:15.248 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:15.589 01:20:15.589 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:15.589 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:15.589 05:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:15.867 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:15.867 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:15.867 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:15.867 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:15.867 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:15.867 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:15.867 { 01:20:15.867 "cntlid": 27, 01:20:15.867 "qid": 0, 01:20:15.867 "state": "enabled", 01:20:15.867 "thread": "nvmf_tgt_poll_group_000", 01:20:15.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:15.867 "listen_address": { 01:20:15.867 "trtype": "TCP", 01:20:15.867 "adrfam": "IPv4", 01:20:15.867 "traddr": "10.0.0.3", 01:20:15.867 "trsvcid": "4420" 01:20:15.867 }, 01:20:15.867 "peer_address": { 01:20:15.867 "trtype": "TCP", 01:20:15.867 "adrfam": "IPv4", 01:20:15.867 "traddr": "10.0.0.1", 01:20:15.867 "trsvcid": "43738" 01:20:15.867 }, 01:20:15.867 "auth": { 01:20:15.867 "state": "completed", 
01:20:15.867 "digest": "sha256", 01:20:15.867 "dhgroup": "ffdhe4096" 01:20:15.867 } 01:20:15.867 } 01:20:15.867 ]' 01:20:15.867 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:15.867 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:15.867 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:15.867 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:20:15.867 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:16.125 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:16.125 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:16.125 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:16.125 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:16.125 05:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:16.693 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:16.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:16.693 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:16.693 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:16.693 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:16.693 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:16.693 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:16.693 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:20:16.693 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:20:16.952 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 01:20:16.952 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:16.952 05:14:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:16.952 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:20:16.952 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:20:16.952 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:16.952 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:16.952 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:16.952 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:16.952 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:16.952 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:16.952 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:16.952 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:17.211 01:20:17.470 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:17.470 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:17.470 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:17.470 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:17.470 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:17.470 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:17.470 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:17.470 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:17.470 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:17.470 { 01:20:17.470 "cntlid": 29, 01:20:17.470 "qid": 0, 01:20:17.470 "state": "enabled", 01:20:17.470 "thread": "nvmf_tgt_poll_group_000", 01:20:17.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:17.470 "listen_address": { 01:20:17.470 "trtype": "TCP", 01:20:17.470 "adrfam": "IPv4", 01:20:17.470 "traddr": "10.0.0.3", 01:20:17.470 "trsvcid": "4420" 01:20:17.470 }, 01:20:17.470 "peer_address": { 01:20:17.470 "trtype": "TCP", 01:20:17.470 "adrfam": 
"IPv4", 01:20:17.470 "traddr": "10.0.0.1", 01:20:17.470 "trsvcid": "43772" 01:20:17.470 }, 01:20:17.470 "auth": { 01:20:17.470 "state": "completed", 01:20:17.470 "digest": "sha256", 01:20:17.470 "dhgroup": "ffdhe4096" 01:20:17.470 } 01:20:17.470 } 01:20:17.470 ]' 01:20:17.470 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:17.729 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:17.730 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:17.730 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:20:17.730 05:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:17.730 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:17.730 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:17.730 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:17.990 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:20:17.990 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:20:18.560 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:18.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:18.560 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:18.560 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:18.560 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:18.560 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:18.560 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:18.560 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:20:18.560 05:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:20:18.819 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 01:20:18.819 05:15:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:18.819 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:18.819 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:20:18.819 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:20:18.819 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:18.819 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:20:18.819 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:18.819 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:18.819 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:18.819 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:20:18.819 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:20:18.819 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:20:19.077 01:20:19.077 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:19.077 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:19.077 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:19.336 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:19.336 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:19.336 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:19.336 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:19.336 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:19.336 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:19.336 { 01:20:19.336 "cntlid": 31, 01:20:19.336 "qid": 0, 01:20:19.336 "state": "enabled", 01:20:19.336 "thread": "nvmf_tgt_poll_group_000", 01:20:19.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:19.336 "listen_address": { 01:20:19.336 "trtype": "TCP", 01:20:19.336 "adrfam": "IPv4", 01:20:19.336 "traddr": "10.0.0.3", 01:20:19.336 "trsvcid": "4420" 01:20:19.336 }, 01:20:19.336 "peer_address": { 01:20:19.336 "trtype": "TCP", 
01:20:19.336 "adrfam": "IPv4", 01:20:19.336 "traddr": "10.0.0.1", 01:20:19.336 "trsvcid": "43806" 01:20:19.336 }, 01:20:19.336 "auth": { 01:20:19.336 "state": "completed", 01:20:19.336 "digest": "sha256", 01:20:19.336 "dhgroup": "ffdhe4096" 01:20:19.336 } 01:20:19.336 } 01:20:19.336 ]' 01:20:19.336 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:19.336 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:19.336 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:19.595 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:20:19.595 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:19.595 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:19.595 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:19.595 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:19.855 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:20:19.855 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:20:20.424 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:20.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:20.424 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:20.424 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:20.424 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:20.424 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:20.424 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:20:20.424 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:20.425 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:20:20.425 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:20:20.425 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 01:20:20.425 
05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:20.425 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:20.425 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:20:20.425 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:20:20.425 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:20.425 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:20.425 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:20.425 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:20.425 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:20.425 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:20.425 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:20.425 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:20.992 01:20:20.992 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:20.992 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:20.992 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:21.252 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:21.252 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:21.252 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:21.252 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:21.252 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:21.252 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:21.252 { 01:20:21.252 "cntlid": 33, 01:20:21.252 "qid": 0, 01:20:21.252 "state": "enabled", 01:20:21.252 "thread": "nvmf_tgt_poll_group_000", 01:20:21.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:21.252 "listen_address": { 01:20:21.252 "trtype": "TCP", 01:20:21.252 "adrfam": "IPv4", 01:20:21.252 "traddr": 
"10.0.0.3", 01:20:21.252 "trsvcid": "4420" 01:20:21.252 }, 01:20:21.252 "peer_address": { 01:20:21.252 "trtype": "TCP", 01:20:21.252 "adrfam": "IPv4", 01:20:21.252 "traddr": "10.0.0.1", 01:20:21.252 "trsvcid": "38356" 01:20:21.252 }, 01:20:21.252 "auth": { 01:20:21.252 "state": "completed", 01:20:21.252 "digest": "sha256", 01:20:21.252 "dhgroup": "ffdhe6144" 01:20:21.252 } 01:20:21.252 } 01:20:21.252 ]' 01:20:21.252 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:21.252 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:21.252 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:21.252 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:20:21.252 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:21.252 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:21.252 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:21.252 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:21.512 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:21.512 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:22.079 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:22.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:22.079 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:22.079 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:22.079 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:22.079 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:22.079 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:22.079 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:20:22.079 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:20:22.338 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 01:20:22.338 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:22.338 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:22.338 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:20:22.338 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:20:22.338 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:22.338 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:22.338 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:22.338 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:22.338 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:22.338 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:22.338 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:22.338 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:22.597 01:20:22.597 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:22.597 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:22.597 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:22.855 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:22.855 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:22.855 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:22.855 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:22.855 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:22.855 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:22.855 { 01:20:22.855 "cntlid": 35, 01:20:22.855 "qid": 0, 01:20:22.855 "state": "enabled", 01:20:22.855 "thread": "nvmf_tgt_poll_group_000", 
01:20:22.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:22.855 "listen_address": { 01:20:22.855 "trtype": "TCP", 01:20:22.855 "adrfam": "IPv4", 01:20:22.855 "traddr": "10.0.0.3", 01:20:22.855 "trsvcid": "4420" 01:20:22.855 }, 01:20:22.855 "peer_address": { 01:20:22.855 "trtype": "TCP", 01:20:22.855 "adrfam": "IPv4", 01:20:22.855 "traddr": "10.0.0.1", 01:20:22.855 "trsvcid": "38376" 01:20:22.855 }, 01:20:22.855 "auth": { 01:20:22.855 "state": "completed", 01:20:22.855 "digest": "sha256", 01:20:22.855 "dhgroup": "ffdhe6144" 01:20:22.855 } 01:20:22.855 } 01:20:22.855 ]' 01:20:22.855 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:22.855 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:22.855 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:23.113 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:20:23.113 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:23.113 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:23.113 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:23.113 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:23.372 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:23.372 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:23.945 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:23.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:23.945 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:23.945 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:23.945 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:23.945 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:23.945 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:23.945 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:20:23.945 05:15:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:20:23.945 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 01:20:23.945 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:23.945 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:23.945 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:20:23.945 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:20:23.945 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:23.945 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:23.945 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:23.946 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:23.946 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:23.946 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:23.946 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:23.946 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:24.512 01:20:24.512 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:24.512 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:24.512 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:24.771 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:24.771 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:24.771 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:24.771 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:24.771 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:24.771 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:24.771 { 
01:20:24.771 "cntlid": 37, 01:20:24.771 "qid": 0, 01:20:24.771 "state": "enabled", 01:20:24.771 "thread": "nvmf_tgt_poll_group_000", 01:20:24.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:24.771 "listen_address": { 01:20:24.771 "trtype": "TCP", 01:20:24.771 "adrfam": "IPv4", 01:20:24.771 "traddr": "10.0.0.3", 01:20:24.771 "trsvcid": "4420" 01:20:24.771 }, 01:20:24.771 "peer_address": { 01:20:24.771 "trtype": "TCP", 01:20:24.771 "adrfam": "IPv4", 01:20:24.771 "traddr": "10.0.0.1", 01:20:24.771 "trsvcid": "38404" 01:20:24.771 }, 01:20:24.771 "auth": { 01:20:24.771 "state": "completed", 01:20:24.771 "digest": "sha256", 01:20:24.771 "dhgroup": "ffdhe6144" 01:20:24.771 } 01:20:24.771 } 01:20:24.771 ]' 01:20:24.771 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:24.771 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:24.771 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:24.771 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:20:24.771 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:24.771 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:24.771 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:24.771 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:25.029 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:20:25.029 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:20:25.594 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:25.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:25.594 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:25.594 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:25.594 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:25.594 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:25.594 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:25.594 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:20:25.594 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:20:25.851 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 01:20:25.851 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:25.851 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:25.851 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:20:25.851 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:20:25.851 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:25.851 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:20:25.851 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:25.851 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:25.851 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:25.851 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:20:25.851 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:20:25.851 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:20:26.109 01:20:26.109 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:26.109 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:26.109 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:26.384 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:26.384 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:26.384 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:26.384 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:26.384 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:26.384 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
01:20:26.384 { 01:20:26.384 "cntlid": 39, 01:20:26.384 "qid": 0, 01:20:26.384 "state": "enabled", 01:20:26.384 "thread": "nvmf_tgt_poll_group_000", 01:20:26.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:26.384 "listen_address": { 01:20:26.384 "trtype": "TCP", 01:20:26.384 "adrfam": "IPv4", 01:20:26.384 "traddr": "10.0.0.3", 01:20:26.384 "trsvcid": "4420" 01:20:26.384 }, 01:20:26.384 "peer_address": { 01:20:26.384 "trtype": "TCP", 01:20:26.384 "adrfam": "IPv4", 01:20:26.384 "traddr": "10.0.0.1", 01:20:26.384 "trsvcid": "38442" 01:20:26.384 }, 01:20:26.384 "auth": { 01:20:26.384 "state": "completed", 01:20:26.384 "digest": "sha256", 01:20:26.384 "dhgroup": "ffdhe6144" 01:20:26.384 } 01:20:26.384 } 01:20:26.384 ]' 01:20:26.384 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:26.384 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:26.384 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:26.642 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:20:26.642 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:26.642 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:26.642 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:26.642 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:26.642 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:20:26.642 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:20:27.209 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:27.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:27.468 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:28.035 01:20:28.293 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:28.293 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:28.293 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:28.293 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:28.293 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:28.293 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:28.294 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:28.294 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 01:20:28.294 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:28.294 { 01:20:28.294 "cntlid": 41, 01:20:28.294 "qid": 0, 01:20:28.294 "state": "enabled", 01:20:28.294 "thread": "nvmf_tgt_poll_group_000", 01:20:28.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:28.294 "listen_address": { 01:20:28.294 "trtype": "TCP", 01:20:28.294 "adrfam": "IPv4", 01:20:28.294 "traddr": "10.0.0.3", 01:20:28.294 "trsvcid": "4420" 01:20:28.294 }, 01:20:28.294 "peer_address": { 01:20:28.294 "trtype": "TCP", 01:20:28.294 "adrfam": "IPv4", 01:20:28.294 "traddr": "10.0.0.1", 01:20:28.294 "trsvcid": "38460" 01:20:28.294 }, 01:20:28.294 "auth": { 01:20:28.294 "state": "completed", 01:20:28.294 "digest": "sha256", 01:20:28.294 "dhgroup": "ffdhe8192" 01:20:28.294 } 01:20:28.294 } 01:20:28.294 ]' 01:20:28.294 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:28.553 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:28.553 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:28.553 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:20:28.553 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:28.553 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:28.553 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:28.553 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:28.812 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:28.813 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:29.382 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:29.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:29.382 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:29.382 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:29.382 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
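The jq checks above (target/auth.sh@75-77) confirm that the qpair negotiated the expected digest and DH group and reached the completed authentication state. A minimal stand-alone sketch of the same verification, reusing the rpc.py path from this log and assuming the default target RPC socket, could look like:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path taken from this log
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Ask the target for the subsystem's active qpairs and inspect the first one.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")   # default target socket assumed

  digest=$(jq -r '.[0].auth.digest'  <<< "$qpairs")
  dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
  state=$(jq -r '.[0].auth.state'    <<< "$qpairs")

  [[ $digest == sha256 && $dhgroup == ffdhe8192 && $state == completed ]] ||
          { echo "unexpected auth parameters: $digest/$dhgroup/$state" >&2; exit 1; }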
01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:29.383 05:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:29.952 01:20:29.952 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:29.952 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:29.952 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:30.211 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:30.211 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:30.211 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:30.211 05:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:30.211 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:30.211 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:30.211 { 01:20:30.211 "cntlid": 43, 01:20:30.211 "qid": 0, 01:20:30.211 "state": "enabled", 01:20:30.211 "thread": "nvmf_tgt_poll_group_000", 01:20:30.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:30.211 "listen_address": { 01:20:30.211 "trtype": "TCP", 01:20:30.211 "adrfam": "IPv4", 01:20:30.211 "traddr": "10.0.0.3", 01:20:30.211 "trsvcid": "4420" 01:20:30.211 }, 01:20:30.212 "peer_address": { 01:20:30.212 "trtype": "TCP", 01:20:30.212 "adrfam": "IPv4", 01:20:30.212 "traddr": "10.0.0.1", 01:20:30.212 "trsvcid": "38484" 01:20:30.212 }, 01:20:30.212 "auth": { 01:20:30.212 "state": "completed", 01:20:30.212 "digest": "sha256", 01:20:30.212 "dhgroup": "ffdhe8192" 01:20:30.212 } 01:20:30.212 } 01:20:30.212 ]' 01:20:30.212 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:30.212 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:30.212 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:30.212 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:20:30.212 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:30.470 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:30.470 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:30.470 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:30.730 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:30.730 05:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:31.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
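Each iteration drives the SPDK host (initiator) through its own RPC socket: bdev_nvme_set_options pins the allowed digests and DH groups, bdev_nvme_attach_controller authenticates with the per-iteration key, and bdev_nvme_detach_controller tears the connection down again. A condensed sketch of that host-side sequence, using the socket, addresses and NQNs that appear in this log (the key1/ckey1 keyring entries are assumed to have been registered earlier in the test and are not shown here):

  hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Restrict the host to the digest/DH group under test.
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # Attach to the target, authenticating with key1 (ckey1 adds controller-to-host auth).
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
          -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Confirm the controller exists, then detach it again.
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'
  hostrpc bdev_nvme_detach_controller nvme0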
01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:31.300 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:31.559 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:31.559 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:31.559 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:31.559 05:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:32.128 01:20:32.128 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:32.128 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:32.128 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:32.128 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:32.128 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:32.128 05:15:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:32.128 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:32.387 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:32.387 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:32.387 { 01:20:32.387 "cntlid": 45, 01:20:32.387 "qid": 0, 01:20:32.387 "state": "enabled", 01:20:32.387 "thread": "nvmf_tgt_poll_group_000", 01:20:32.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:32.387 "listen_address": { 01:20:32.387 "trtype": "TCP", 01:20:32.387 "adrfam": "IPv4", 01:20:32.387 "traddr": "10.0.0.3", 01:20:32.387 "trsvcid": "4420" 01:20:32.387 }, 01:20:32.387 "peer_address": { 01:20:32.387 "trtype": "TCP", 01:20:32.387 "adrfam": "IPv4", 01:20:32.387 "traddr": "10.0.0.1", 01:20:32.387 "trsvcid": "45598" 01:20:32.387 }, 01:20:32.387 "auth": { 01:20:32.387 "state": "completed", 01:20:32.387 "digest": "sha256", 01:20:32.387 "dhgroup": "ffdhe8192" 01:20:32.387 } 01:20:32.387 } 01:20:32.387 ]' 01:20:32.387 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:32.387 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:32.387 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:32.387 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:20:32.387 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:32.387 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:32.387 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:32.387 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:32.645 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:20:32.645 05:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:20:33.216 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:33.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:33.216 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:33.216 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
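On the target side the counterpart of each iteration is nvmf_subsystem_add_host with the matching --dhchap-key/--dhchap-ctrlr-key, followed by nvmf_subsystem_remove_host once the host has disconnected. A sketch of that pairing, again reusing the NQNs from this log and assuming key2/ckey2 already exist in the target's keyring:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # default target socket assumed
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Allow this host to connect, requiring DH-HMAC-CHAP with key2 (ckey2 enables
  # controller-to-host authentication as well).
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # ... host connects, auth state is verified, host disconnects ...

  # Drop the host entry again before the next key/dhgroup combination is tried.
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"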
01:20:33.216 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:33.216 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:33.216 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:33.216 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:20:33.216 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:20:33.475 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 01:20:33.476 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:33.476 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 01:20:33.476 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:20:33.476 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:20:33.476 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:33.476 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:20:33.476 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:33.476 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:33.476 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:33.476 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:20:33.476 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:20:33.476 05:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:20:34.043 01:20:34.043 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:34.043 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:34.043 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:34.305 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:34.305 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:34.305 
05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:34.305 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:34.305 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:34.305 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:34.305 { 01:20:34.305 "cntlid": 47, 01:20:34.305 "qid": 0, 01:20:34.305 "state": "enabled", 01:20:34.305 "thread": "nvmf_tgt_poll_group_000", 01:20:34.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:34.305 "listen_address": { 01:20:34.305 "trtype": "TCP", 01:20:34.306 "adrfam": "IPv4", 01:20:34.306 "traddr": "10.0.0.3", 01:20:34.306 "trsvcid": "4420" 01:20:34.306 }, 01:20:34.306 "peer_address": { 01:20:34.306 "trtype": "TCP", 01:20:34.306 "adrfam": "IPv4", 01:20:34.306 "traddr": "10.0.0.1", 01:20:34.306 "trsvcid": "45616" 01:20:34.306 }, 01:20:34.306 "auth": { 01:20:34.306 "state": "completed", 01:20:34.306 "digest": "sha256", 01:20:34.306 "dhgroup": "ffdhe8192" 01:20:34.306 } 01:20:34.306 } 01:20:34.306 ]' 01:20:34.306 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:34.306 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 01:20:34.306 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:34.306 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:20:34.306 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:34.306 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:34.306 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:34.306 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:34.585 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:20:34.585 05:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:20:35.151 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:35.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:35.151 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:35.151 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:35.151 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
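At this point the sha256 sweep is complete and the outer loops (target/auth.sh@118-121) move on to sha384, restarting the DH-group and key sweep from the null group. The overall shape of that sweep, with the digest and DH-group lists hedged to the values actually visible in this log (the real auth.sh arrays may contain more entries), is roughly:

  # Values observed in this log; a sketch of the loop structure, not the script itself.
  digests=(sha256 sha384)
  dhgroups=(null ffdhe2048 ffdhe6144 ffdhe8192)
  keys=(key0 key1 key2 key3)

  for digest in "${digests[@]}"; do
          for dhgroup in "${dhgroups[@]}"; do
                  for keyid in "${!keys[@]}"; do
                          # Each combination reconfigures the host options, re-adds the host entry
                          # on the target, attaches/authenticates, verifies the qpair, and tears down.
                          echo "testing $digest / $dhgroup / key$keyid"
                  done
          done
  done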
01:20:35.151 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:35.151 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 01:20:35.151 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:20:35.151 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:35.151 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:20:35.151 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:20:35.409 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 01:20:35.409 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:35.409 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:20:35.409 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:20:35.409 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:20:35.409 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:35.409 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:35.409 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:35.409 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:35.409 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:35.409 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:35.409 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:35.409 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:35.668 01:20:35.668 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:35.668 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:35.668 05:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:35.980 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:35.980 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:35.980 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:35.980 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:35.980 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:35.980 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:35.980 { 01:20:35.980 "cntlid": 49, 01:20:35.980 "qid": 0, 01:20:35.980 "state": "enabled", 01:20:35.980 "thread": "nvmf_tgt_poll_group_000", 01:20:35.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:35.980 "listen_address": { 01:20:35.980 "trtype": "TCP", 01:20:35.980 "adrfam": "IPv4", 01:20:35.980 "traddr": "10.0.0.3", 01:20:35.980 "trsvcid": "4420" 01:20:35.980 }, 01:20:35.980 "peer_address": { 01:20:35.980 "trtype": "TCP", 01:20:35.980 "adrfam": "IPv4", 01:20:35.980 "traddr": "10.0.0.1", 01:20:35.980 "trsvcid": "45628" 01:20:35.980 }, 01:20:35.980 "auth": { 01:20:35.980 "state": "completed", 01:20:35.981 "digest": "sha384", 01:20:35.981 "dhgroup": "null" 01:20:35.981 } 01:20:35.981 } 01:20:35.981 ]' 01:20:35.981 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:35.981 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:20:35.981 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:35.981 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:20:35.981 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:35.981 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:35.981 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:35.981 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:36.238 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:36.239 05:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:36.806 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:36.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:36.806 05:15:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:36.806 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:36.806 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:36.806 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:36.806 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:36.806 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:20:36.806 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:20:37.065 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 01:20:37.065 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:37.065 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:20:37.065 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:20:37.065 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:20:37.065 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:37.065 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:37.065 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:37.065 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:37.065 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:37.065 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:37.065 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:37.065 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:37.325 01:20:37.325 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:37.325 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:37.325 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:37.325 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:37.325 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:37.325 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:37.325 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:37.325 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:37.325 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:37.325 { 01:20:37.325 "cntlid": 51, 01:20:37.325 "qid": 0, 01:20:37.325 "state": "enabled", 01:20:37.325 "thread": "nvmf_tgt_poll_group_000", 01:20:37.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:37.325 "listen_address": { 01:20:37.325 "trtype": "TCP", 01:20:37.325 "adrfam": "IPv4", 01:20:37.325 "traddr": "10.0.0.3", 01:20:37.325 "trsvcid": "4420" 01:20:37.325 }, 01:20:37.325 "peer_address": { 01:20:37.325 "trtype": "TCP", 01:20:37.325 "adrfam": "IPv4", 01:20:37.325 "traddr": "10.0.0.1", 01:20:37.325 "trsvcid": "45636" 01:20:37.325 }, 01:20:37.325 "auth": { 01:20:37.325 "state": "completed", 01:20:37.325 "digest": "sha384", 01:20:37.325 "dhgroup": "null" 01:20:37.325 } 01:20:37.325 } 01:20:37.325 ]' 01:20:37.585 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:37.585 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:20:37.585 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:37.585 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:20:37.585 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:37.585 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:37.585 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:37.585 05:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:37.844 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:37.844 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:38.420 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:38.420 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:38.420 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:38.420 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:38.420 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:38.420 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:38.420 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:38.420 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:20:38.420 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:20:38.681 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 01:20:38.681 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:38.681 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:20:38.681 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:20:38.681 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:20:38.681 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:38.681 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:38.681 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:38.681 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:38.681 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:38.681 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:38.681 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:38.681 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:38.681 01:20:38.941 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:38.941 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 01:20:38.941 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:38.941 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:38.941 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:38.941 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:38.941 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:38.941 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:38.941 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:38.941 { 01:20:38.941 "cntlid": 53, 01:20:38.941 "qid": 0, 01:20:38.941 "state": "enabled", 01:20:38.941 "thread": "nvmf_tgt_poll_group_000", 01:20:38.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:38.941 "listen_address": { 01:20:38.941 "trtype": "TCP", 01:20:38.941 "adrfam": "IPv4", 01:20:38.941 "traddr": "10.0.0.3", 01:20:38.941 "trsvcid": "4420" 01:20:38.941 }, 01:20:38.941 "peer_address": { 01:20:38.941 "trtype": "TCP", 01:20:38.941 "adrfam": "IPv4", 01:20:38.941 "traddr": "10.0.0.1", 01:20:38.941 "trsvcid": "45672" 01:20:38.941 }, 01:20:38.941 "auth": { 01:20:38.941 "state": "completed", 01:20:38.941 "digest": "sha384", 01:20:38.941 "dhgroup": "null" 01:20:38.941 } 01:20:38.941 } 01:20:38.941 ]' 01:20:39.201 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:39.201 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:20:39.201 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:39.201 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:20:39.201 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:39.201 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:39.201 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:39.201 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:39.461 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:20:39.461 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:20:40.030 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:40.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:40.030 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:40.030 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:40.030 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:40.030 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:40.030 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:40.030 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:20:40.030 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 01:20:40.290 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 01:20:40.290 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:40.290 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:20:40.290 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:20:40.290 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:20:40.290 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:40.290 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:20:40.290 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:40.290 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:40.290 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:40.290 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:20:40.290 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:20:40.290 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:20:40.549 01:20:40.549 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:40.549 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
01:20:40.549 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:40.549 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:40.549 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:40.549 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:40.549 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:40.809 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:40.809 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:40.809 { 01:20:40.809 "cntlid": 55, 01:20:40.809 "qid": 0, 01:20:40.809 "state": "enabled", 01:20:40.809 "thread": "nvmf_tgt_poll_group_000", 01:20:40.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:40.809 "listen_address": { 01:20:40.809 "trtype": "TCP", 01:20:40.809 "adrfam": "IPv4", 01:20:40.809 "traddr": "10.0.0.3", 01:20:40.809 "trsvcid": "4420" 01:20:40.809 }, 01:20:40.809 "peer_address": { 01:20:40.809 "trtype": "TCP", 01:20:40.809 "adrfam": "IPv4", 01:20:40.809 "traddr": "10.0.0.1", 01:20:40.809 "trsvcid": "43132" 01:20:40.809 }, 01:20:40.809 "auth": { 01:20:40.809 "state": "completed", 01:20:40.809 "digest": "sha384", 01:20:40.809 "dhgroup": "null" 01:20:40.809 } 01:20:40.809 } 01:20:40.809 ]' 01:20:40.809 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:40.809 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:20:40.809 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:40.809 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:20:40.809 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:40.809 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:40.809 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:40.809 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:41.070 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:20:41.070 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:20:41.639 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:41.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
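Besides the SPDK host, each iteration also exercises the kernel initiator: nvme-cli connects with the per-key DH-HMAC-CHAP secrets and then disconnects, as in the nvme connect/disconnect pair just above. A stand-alone sketch of that step, with placeholder secrets where this run used its generated DHHC-1 test keys (the --dhchap-ctrl-secret option is only passed when a controller key is configured for the key index in use, e.g. it is omitted for key3 here):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b
  hostid=0567fff2-ddaf-4a4f-877d-a2600d7e662b
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Placeholder secrets: substitute the DHHC-1:xx:... values configured for the key in use.
  key_secret='DHHC-1:02:<host secret>'
  ctrl_secret='DHHC-1:01:<controller secret>'

  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
          --dhchap-secret "$key_secret" --dhchap-ctrl-secret "$ctrl_secret"

  nvme disconnect -n "$subnqn"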
01:20:41.639 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:41.639 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:41.639 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:41.639 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:41.639 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:20:41.639 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:41.639 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:20:41.639 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:20:41.899 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 01:20:41.899 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:41.899 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:20:41.899 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:20:41.899 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:20:41.899 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:41.899 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:41.899 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:41.899 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:41.899 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:41.899 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:41.899 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:41.899 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:42.159 01:20:42.159 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:42.159 05:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:42.159 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:42.418 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:42.418 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:42.418 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:42.418 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:42.418 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:42.418 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:42.418 { 01:20:42.418 "cntlid": 57, 01:20:42.418 "qid": 0, 01:20:42.418 "state": "enabled", 01:20:42.418 "thread": "nvmf_tgt_poll_group_000", 01:20:42.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:42.418 "listen_address": { 01:20:42.418 "trtype": "TCP", 01:20:42.418 "adrfam": "IPv4", 01:20:42.418 "traddr": "10.0.0.3", 01:20:42.418 "trsvcid": "4420" 01:20:42.418 }, 01:20:42.418 "peer_address": { 01:20:42.418 "trtype": "TCP", 01:20:42.418 "adrfam": "IPv4", 01:20:42.418 "traddr": "10.0.0.1", 01:20:42.418 "trsvcid": "43166" 01:20:42.418 }, 01:20:42.418 "auth": { 01:20:42.418 "state": "completed", 01:20:42.418 "digest": "sha384", 01:20:42.418 "dhgroup": "ffdhe2048" 01:20:42.418 } 01:20:42.418 } 01:20:42.418 ]' 01:20:42.418 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:42.418 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:20:42.418 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:42.418 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:20:42.419 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:42.419 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:42.419 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:42.419 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:42.677 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:42.677 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: 
--dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:43.254 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:43.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:43.531 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:43.790 01:20:44.049 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:44.049 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:44.049 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:44.049 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:44.049 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:44.049 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:44.049 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:44.049 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:44.049 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:44.049 { 01:20:44.049 "cntlid": 59, 01:20:44.049 "qid": 0, 01:20:44.049 "state": "enabled", 01:20:44.049 "thread": "nvmf_tgt_poll_group_000", 01:20:44.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:44.049 "listen_address": { 01:20:44.049 "trtype": "TCP", 01:20:44.049 "adrfam": "IPv4", 01:20:44.049 "traddr": "10.0.0.3", 01:20:44.049 "trsvcid": "4420" 01:20:44.049 }, 01:20:44.049 "peer_address": { 01:20:44.049 "trtype": "TCP", 01:20:44.049 "adrfam": "IPv4", 01:20:44.049 "traddr": "10.0.0.1", 01:20:44.049 "trsvcid": "43196" 01:20:44.049 }, 01:20:44.049 "auth": { 01:20:44.049 "state": "completed", 01:20:44.049 "digest": "sha384", 01:20:44.049 "dhgroup": "ffdhe2048" 01:20:44.049 } 01:20:44.049 } 01:20:44.049 ]' 01:20:44.049 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:44.308 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:20:44.308 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:44.308 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:20:44.308 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:44.308 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:44.308 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:44.308 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:44.568 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:44.568 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:45.137 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:45.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:45.137 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:45.137 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:45.137 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:45.137 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:45.137 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:45.137 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:20:45.137 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:20:45.396 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 01:20:45.396 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:45.396 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:20:45.396 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:20:45.396 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:20:45.396 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:45.396 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:45.396 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:45.396 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:45.396 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:45.396 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:45.396 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:45.396 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:45.655 01:20:45.655 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:45.655 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:45.655 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:45.914 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:45.914 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:45.914 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:45.914 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:45.914 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:45.914 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:45.914 { 01:20:45.914 "cntlid": 61, 01:20:45.914 "qid": 0, 01:20:45.914 "state": "enabled", 01:20:45.914 "thread": "nvmf_tgt_poll_group_000", 01:20:45.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:45.914 "listen_address": { 01:20:45.914 "trtype": "TCP", 01:20:45.914 "adrfam": "IPv4", 01:20:45.914 "traddr": "10.0.0.3", 01:20:45.914 "trsvcid": "4420" 01:20:45.914 }, 01:20:45.914 "peer_address": { 01:20:45.914 "trtype": "TCP", 01:20:45.914 "adrfam": "IPv4", 01:20:45.914 "traddr": "10.0.0.1", 01:20:45.914 "trsvcid": "43224" 01:20:45.914 }, 01:20:45.914 "auth": { 01:20:45.914 "state": "completed", 01:20:45.914 "digest": "sha384", 01:20:45.914 "dhgroup": "ffdhe2048" 01:20:45.914 } 01:20:45.914 } 01:20:45.914 ]' 01:20:45.914 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:45.914 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:20:45.914 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:45.914 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:20:45.914 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:46.174 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:46.174 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:46.174 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:46.174 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:20:46.174 05:15:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:20:46.744 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:46.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:46.744 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:47.002 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:20:47.003 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:20:47.262 01:20:47.521 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:47.521 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:47.521 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:47.521 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:47.522 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:47.522 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:47.522 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:47.522 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:47.780 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:47.780 { 01:20:47.780 "cntlid": 63, 01:20:47.780 "qid": 0, 01:20:47.780 "state": "enabled", 01:20:47.780 "thread": "nvmf_tgt_poll_group_000", 01:20:47.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:47.780 "listen_address": { 01:20:47.780 "trtype": "TCP", 01:20:47.780 "adrfam": "IPv4", 01:20:47.780 "traddr": "10.0.0.3", 01:20:47.780 "trsvcid": "4420" 01:20:47.780 }, 01:20:47.780 "peer_address": { 01:20:47.780 "trtype": "TCP", 01:20:47.780 "adrfam": "IPv4", 01:20:47.780 "traddr": "10.0.0.1", 01:20:47.780 "trsvcid": "43254" 01:20:47.780 }, 01:20:47.780 "auth": { 01:20:47.780 "state": "completed", 01:20:47.780 "digest": "sha384", 01:20:47.780 "dhgroup": "ffdhe2048" 01:20:47.780 } 01:20:47.780 } 01:20:47.780 ]' 01:20:47.780 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:47.780 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:20:47.780 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:47.780 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:20:47.780 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:47.780 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:47.780 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:47.780 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:48.039 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:20:48.039 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:20:48.607 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:48.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:48.607 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:48.607 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.607 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:48.607 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.607 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:20:48.607 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:48.607 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:20:48.607 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:20:48.865 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 01:20:48.865 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:48.865 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:20:48.865 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:20:48.865 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:20:48.865 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:48.865 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:48.865 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:48.865 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:48.865 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:48.865 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:48.865 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 01:20:48.865 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:49.125 01:20:49.125 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:49.125 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:49.125 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:49.387 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:49.387 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:49.387 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:49.387 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:49.387 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:49.387 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:49.387 { 01:20:49.387 "cntlid": 65, 01:20:49.387 "qid": 0, 01:20:49.387 "state": "enabled", 01:20:49.387 "thread": "nvmf_tgt_poll_group_000", 01:20:49.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:49.387 "listen_address": { 01:20:49.387 "trtype": "TCP", 01:20:49.387 "adrfam": "IPv4", 01:20:49.387 "traddr": "10.0.0.3", 01:20:49.387 "trsvcid": "4420" 01:20:49.387 }, 01:20:49.387 "peer_address": { 01:20:49.387 "trtype": "TCP", 01:20:49.387 "adrfam": "IPv4", 01:20:49.387 "traddr": "10.0.0.1", 01:20:49.387 "trsvcid": "43288" 01:20:49.387 }, 01:20:49.387 "auth": { 01:20:49.387 "state": "completed", 01:20:49.387 "digest": "sha384", 01:20:49.387 "dhgroup": "ffdhe3072" 01:20:49.387 } 01:20:49.387 } 01:20:49.387 ]' 01:20:49.387 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:49.387 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:20:49.387 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:49.387 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:20:49.387 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:49.646 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:49.646 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:49.646 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:49.646 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:49.646 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:50.214 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:50.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:50.214 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:50.214 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:50.214 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:50.214 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:50.214 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:50.214 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:20:50.214 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:20:50.472 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 01:20:50.472 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:50.472 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:20:50.472 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:20:50.472 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:20:50.472 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:50.472 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:50.472 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:50.472 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:50.472 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:50.472 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:50.472 05:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:50.472 05:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:50.831 01:20:50.832 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:50.832 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:50.832 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:51.091 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:51.091 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:51.091 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:51.091 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:51.091 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:51.091 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:51.091 { 01:20:51.091 "cntlid": 67, 01:20:51.091 "qid": 0, 01:20:51.091 "state": "enabled", 01:20:51.091 "thread": "nvmf_tgt_poll_group_000", 01:20:51.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:51.091 "listen_address": { 01:20:51.091 "trtype": "TCP", 01:20:51.091 "adrfam": "IPv4", 01:20:51.091 "traddr": "10.0.0.3", 01:20:51.092 "trsvcid": "4420" 01:20:51.092 }, 01:20:51.092 "peer_address": { 01:20:51.092 "trtype": "TCP", 01:20:51.092 "adrfam": "IPv4", 01:20:51.092 "traddr": "10.0.0.1", 01:20:51.092 "trsvcid": "44872" 01:20:51.092 }, 01:20:51.092 "auth": { 01:20:51.092 "state": "completed", 01:20:51.092 "digest": "sha384", 01:20:51.092 "dhgroup": "ffdhe3072" 01:20:51.092 } 01:20:51.092 } 01:20:51.092 ]' 01:20:51.092 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:51.092 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:20:51.092 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:51.092 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:20:51.092 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:51.092 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:51.092 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:51.092 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:51.352 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:51.352 05:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:51.925 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:51.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:51.925 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:51.925 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:51.925 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:51.925 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:51.925 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:51.925 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:20:51.925 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:20:52.184 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 01:20:52.184 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:52.184 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:20:52.184 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:20:52.184 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:20:52.184 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:52.185 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:52.185 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:52.185 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:52.185 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:52.185 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:52.185 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:52.185 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:52.444 01:20:52.444 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:52.444 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:52.444 05:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:52.709 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:52.709 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:52.709 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:52.709 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:52.709 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:52.709 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:52.709 { 01:20:52.709 "cntlid": 69, 01:20:52.709 "qid": 0, 01:20:52.709 "state": "enabled", 01:20:52.709 "thread": "nvmf_tgt_poll_group_000", 01:20:52.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:52.709 "listen_address": { 01:20:52.709 "trtype": "TCP", 01:20:52.709 "adrfam": "IPv4", 01:20:52.709 "traddr": "10.0.0.3", 01:20:52.709 "trsvcid": "4420" 01:20:52.709 }, 01:20:52.709 "peer_address": { 01:20:52.709 "trtype": "TCP", 01:20:52.709 "adrfam": "IPv4", 01:20:52.709 "traddr": "10.0.0.1", 01:20:52.709 "trsvcid": "44900" 01:20:52.709 }, 01:20:52.709 "auth": { 01:20:52.709 "state": "completed", 01:20:52.709 "digest": "sha384", 01:20:52.709 "dhgroup": "ffdhe3072" 01:20:52.709 } 01:20:52.709 } 01:20:52.709 ]' 01:20:52.709 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:52.709 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:20:52.709 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:52.709 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:20:52.709 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:52.971 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:52.971 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 01:20:52.971 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:52.971 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:20:52.971 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:20:53.540 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:53.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:53.540 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:53.540 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:53.540 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:53.540 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:53.540 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:53.540 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:20:53.540 05:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:20:53.800 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 01:20:53.800 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:53.800 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:20:53.800 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:20:53.800 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:20:53.800 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:53.800 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:20:53.800 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:53.800 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:53.800 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:53.800 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:20:53.800 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:20:53.800 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:20:54.059 01:20:54.059 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:54.059 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:54.059 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:54.316 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:54.316 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:54.316 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:54.316 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:54.316 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:54.316 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:54.316 { 01:20:54.316 "cntlid": 71, 01:20:54.316 "qid": 0, 01:20:54.316 "state": "enabled", 01:20:54.316 "thread": "nvmf_tgt_poll_group_000", 01:20:54.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:54.316 "listen_address": { 01:20:54.316 "trtype": "TCP", 01:20:54.316 "adrfam": "IPv4", 01:20:54.316 "traddr": "10.0.0.3", 01:20:54.316 "trsvcid": "4420" 01:20:54.316 }, 01:20:54.316 "peer_address": { 01:20:54.316 "trtype": "TCP", 01:20:54.316 "adrfam": "IPv4", 01:20:54.316 "traddr": "10.0.0.1", 01:20:54.316 "trsvcid": "44914" 01:20:54.316 }, 01:20:54.316 "auth": { 01:20:54.316 "state": "completed", 01:20:54.316 "digest": "sha384", 01:20:54.316 "dhgroup": "ffdhe3072" 01:20:54.316 } 01:20:54.316 } 01:20:54.316 ]' 01:20:54.316 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:54.316 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:20:54.316 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:54.574 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:20:54.574 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:54.574 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:54.574 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:54.574 05:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:54.833 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:20:54.833 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:55.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:55.402 05:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:55.402 05:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:20:55.970 01:20:55.970 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:55.970 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:55.970 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:55.970 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:55.970 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:55.970 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:55.970 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:55.970 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:55.970 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:55.970 { 01:20:55.970 "cntlid": 73, 01:20:55.970 "qid": 0, 01:20:55.970 "state": "enabled", 01:20:55.970 "thread": "nvmf_tgt_poll_group_000", 01:20:55.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:55.970 "listen_address": { 01:20:55.970 "trtype": "TCP", 01:20:55.970 "adrfam": "IPv4", 01:20:55.970 "traddr": "10.0.0.3", 01:20:55.970 "trsvcid": "4420" 01:20:55.970 }, 01:20:55.970 "peer_address": { 01:20:55.970 "trtype": "TCP", 01:20:55.970 "adrfam": "IPv4", 01:20:55.970 "traddr": "10.0.0.1", 01:20:55.970 "trsvcid": "44942" 01:20:55.970 }, 01:20:55.970 "auth": { 01:20:55.970 "state": "completed", 01:20:55.970 "digest": "sha384", 01:20:55.970 "dhgroup": "ffdhe4096" 01:20:55.970 } 01:20:55.970 } 01:20:55.970 ]' 01:20:55.970 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:56.243 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:20:56.243 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:56.243 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:20:56.243 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:56.243 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:56.243 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:56.243 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:56.502 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:56.502 05:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:20:57.071 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:57.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:57.071 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:57.071 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:57.071 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:57.071 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:57.071 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:57.071 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:20:57.071 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:20:57.330 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 01:20:57.330 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:57.330 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:20:57.330 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:20:57.330 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:20:57.330 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:57.330 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:57.330 05:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:57.330 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:57.330 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:57.330 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:57.330 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:57.330 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:20:57.590 01:20:57.590 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:57.590 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:57.590 05:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:57.850 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:57.850 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:57.850 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:57.850 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:57.850 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:57.850 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:57.850 { 01:20:57.850 "cntlid": 75, 01:20:57.850 "qid": 0, 01:20:57.850 "state": "enabled", 01:20:57.850 "thread": "nvmf_tgt_poll_group_000", 01:20:57.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:57.850 "listen_address": { 01:20:57.850 "trtype": "TCP", 01:20:57.850 "adrfam": "IPv4", 01:20:57.850 "traddr": "10.0.0.3", 01:20:57.850 "trsvcid": "4420" 01:20:57.850 }, 01:20:57.850 "peer_address": { 01:20:57.850 "trtype": "TCP", 01:20:57.850 "adrfam": "IPv4", 01:20:57.850 "traddr": "10.0.0.1", 01:20:57.850 "trsvcid": "44966" 01:20:57.850 }, 01:20:57.850 "auth": { 01:20:57.850 "state": "completed", 01:20:57.850 "digest": "sha384", 01:20:57.850 "dhgroup": "ffdhe4096" 01:20:57.850 } 01:20:57.850 } 01:20:57.850 ]' 01:20:57.850 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:57.850 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:20:57.850 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:20:57.850 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 01:20:57.850 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:58.110 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:58.110 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:58.110 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:20:58.110 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:58.110 05:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:20:59.048 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:20:59.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:20:59.048 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:20:59.048 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.048 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:59.048 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.048 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:20:59.048 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:20:59.049 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:20:59.049 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 01:20:59.049 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:20:59.049 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:20:59.049 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:20:59.049 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:20:59.049 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:20:59.049 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:59.049 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.049 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:59.049 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.049 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:59.049 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:59.049 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:20:59.309 01:20:59.569 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:20:59.569 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:20:59.569 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:20:59.569 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:20:59.569 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:20:59.569 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:20:59.569 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:20:59.569 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:20:59.569 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:20:59.569 { 01:20:59.569 "cntlid": 77, 01:20:59.569 "qid": 0, 01:20:59.569 "state": "enabled", 01:20:59.569 "thread": "nvmf_tgt_poll_group_000", 01:20:59.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:20:59.569 "listen_address": { 01:20:59.569 "trtype": "TCP", 01:20:59.569 "adrfam": "IPv4", 01:20:59.569 "traddr": "10.0.0.3", 01:20:59.569 "trsvcid": "4420" 01:20:59.569 }, 01:20:59.569 "peer_address": { 01:20:59.569 "trtype": "TCP", 01:20:59.569 "adrfam": "IPv4", 01:20:59.569 "traddr": "10.0.0.1", 01:20:59.569 "trsvcid": "45002" 01:20:59.569 }, 01:20:59.569 "auth": { 01:20:59.569 "state": "completed", 01:20:59.569 "digest": "sha384", 01:20:59.569 "dhgroup": "ffdhe4096" 01:20:59.569 } 01:20:59.569 } 01:20:59.569 ]' 01:20:59.569 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:20:59.878 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:20:59.878 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 01:20:59.878 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:20:59.878 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:20:59.878 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:20:59.878 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:20:59.878 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:00.137 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:00.137 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:00.706 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:00.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:00.706 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:00.706 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:00.706 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:00.706 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:00.706 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:00.706 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:21:00.706 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:21:00.706 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 01:21:00.706 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:00.706 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:21:00.706 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:21:00.706 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:21:00.706 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:00.706 05:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:21:00.706 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:00.706 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:00.706 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:00.706 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:21:00.706 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:00.706 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:01.275 01:21:01.275 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:01.275 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:01.275 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:01.275 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:01.275 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:01.275 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:01.275 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:01.276 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:01.276 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:01.276 { 01:21:01.276 "cntlid": 79, 01:21:01.276 "qid": 0, 01:21:01.276 "state": "enabled", 01:21:01.276 "thread": "nvmf_tgt_poll_group_000", 01:21:01.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:01.276 "listen_address": { 01:21:01.276 "trtype": "TCP", 01:21:01.276 "adrfam": "IPv4", 01:21:01.276 "traddr": "10.0.0.3", 01:21:01.276 "trsvcid": "4420" 01:21:01.276 }, 01:21:01.276 "peer_address": { 01:21:01.276 "trtype": "TCP", 01:21:01.276 "adrfam": "IPv4", 01:21:01.276 "traddr": "10.0.0.1", 01:21:01.276 "trsvcid": "44950" 01:21:01.276 }, 01:21:01.276 "auth": { 01:21:01.276 "state": "completed", 01:21:01.276 "digest": "sha384", 01:21:01.276 "dhgroup": "ffdhe4096" 01:21:01.276 } 01:21:01.276 } 01:21:01.276 ]' 01:21:01.276 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:01.535 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:21:01.535 05:15:43 
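The key3 passes above are why connect_authenticate builds its controller-key argument as ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) at target/auth.sh@68: the :+ expansion yields the flag pair only when a controller key exists for that index, and an empty array otherwise, so nvmf_subsystem_add_host and bdev_nvme_attach_controller are called with --dhchap-key key3 alone. A stripped-down illustration of the same bash idiom (array contents hypothetical, mirroring this run where index 3 has no paired controller key):

  ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)   # no entry for index 3
  for keyid in 0 1 2 3; do
      ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      # for keyid 3, "${ckey[@]}" expands to nothing, so no controller key is passed
      echo nvmf_subsystem_add_host --dhchap-key "key$keyid" "${ckey[@]}"
  done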
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:01.535 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:21:01.535 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:01.535 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:01.535 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:01.535 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:01.795 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:01.795 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:02.362 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:02.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:02.362 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:02.362 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.362 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:02.362 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.362 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:21:02.362 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:02.362 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:21:02.362 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:21:02.620 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 01:21:02.620 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:02.620 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:21:02.620 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:21:02.620 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:21:02.620 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:02.620 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:02.620 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:02.620 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:02.620 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:02.620 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:02.620 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:02.620 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:02.878 01:21:02.878 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:02.878 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:02.878 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:03.137 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:03.137 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:03.137 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:03.137 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:03.137 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:03.137 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:03.137 { 01:21:03.137 "cntlid": 81, 01:21:03.137 "qid": 0, 01:21:03.137 "state": "enabled", 01:21:03.137 "thread": "nvmf_tgt_poll_group_000", 01:21:03.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:03.137 "listen_address": { 01:21:03.137 "trtype": "TCP", 01:21:03.137 "adrfam": "IPv4", 01:21:03.137 "traddr": "10.0.0.3", 01:21:03.137 "trsvcid": "4420" 01:21:03.137 }, 01:21:03.137 "peer_address": { 01:21:03.137 "trtype": "TCP", 01:21:03.137 "adrfam": "IPv4", 01:21:03.137 "traddr": "10.0.0.1", 01:21:03.137 "trsvcid": "44976" 01:21:03.137 }, 01:21:03.137 "auth": { 01:21:03.137 "state": "completed", 01:21:03.137 "digest": "sha384", 01:21:03.137 "dhgroup": "ffdhe6144" 01:21:03.137 } 01:21:03.137 } 01:21:03.137 ]' 01:21:03.137 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
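Each connect_authenticate pass ends with the same verification seen here: the target's qpairs are dumped over RPC and the negotiated digest, DH group and auth state are compared with jq against the values configured for that pass. A minimal standalone sketch of that check for the ffdhe6144/key0 pass (assuming the target's default RPC socket; the script itself goes through its rpc_cmd wrapper):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]] || exit 1
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]] || exit 1
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] || exit 1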
01:21:03.137 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:21:03.137 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:03.395 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:21:03.396 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:03.396 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:03.396 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:03.396 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:03.396 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:21:03.396 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:04.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:04.331 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:04.898 01:21:04.898 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:04.898 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:04.898 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:04.898 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:04.898 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:04.898 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:04.898 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:04.898 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:04.898 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:04.898 { 01:21:04.898 "cntlid": 83, 01:21:04.898 "qid": 0, 01:21:04.898 "state": "enabled", 01:21:04.898 "thread": "nvmf_tgt_poll_group_000", 01:21:04.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:04.898 "listen_address": { 01:21:04.898 "trtype": "TCP", 01:21:04.898 "adrfam": "IPv4", 01:21:04.898 "traddr": "10.0.0.3", 01:21:04.898 "trsvcid": "4420" 01:21:04.898 }, 01:21:04.898 "peer_address": { 01:21:04.898 "trtype": "TCP", 01:21:04.898 "adrfam": "IPv4", 01:21:04.898 "traddr": "10.0.0.1", 01:21:04.898 "trsvcid": "45012" 01:21:04.898 }, 01:21:04.898 "auth": { 01:21:04.898 "state": "completed", 01:21:04.898 "digest": "sha384", 
01:21:04.898 "dhgroup": "ffdhe6144" 01:21:04.898 } 01:21:04.898 } 01:21:04.898 ]' 01:21:04.898 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:05.157 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:21:05.157 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:05.157 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:21:05.157 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:05.157 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:05.157 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:05.157 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:05.416 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:21:05.416 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:21:05.985 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:05.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:05.985 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:05.985 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:05.985 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:05.985 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:05.985 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:05.985 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:21:05.985 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:21:06.245 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 01:21:06.245 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:06.245 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 01:21:06.245 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:21:06.245 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:21:06.245 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:06.245 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:06.245 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:06.245 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:06.245 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:06.245 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:06.245 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:06.245 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:06.505 01:21:06.505 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:06.505 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:06.505 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:06.762 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:06.762 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:06.762 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:06.762 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:06.762 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:06.762 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:06.762 { 01:21:06.762 "cntlid": 85, 01:21:06.762 "qid": 0, 01:21:06.762 "state": "enabled", 01:21:06.762 "thread": "nvmf_tgt_poll_group_000", 01:21:06.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:06.762 "listen_address": { 01:21:06.762 "trtype": "TCP", 01:21:06.762 "adrfam": "IPv4", 01:21:06.762 "traddr": "10.0.0.3", 01:21:06.762 "trsvcid": "4420" 01:21:06.762 }, 01:21:06.762 "peer_address": { 01:21:06.762 "trtype": "TCP", 01:21:06.762 "adrfam": "IPv4", 01:21:06.762 "traddr": "10.0.0.1", 01:21:06.762 "trsvcid": "45046" 
01:21:06.762 }, 01:21:06.762 "auth": { 01:21:06.762 "state": "completed", 01:21:06.762 "digest": "sha384", 01:21:06.762 "dhgroup": "ffdhe6144" 01:21:06.762 } 01:21:06.762 } 01:21:06.762 ]' 01:21:06.762 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:06.762 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:21:06.762 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:06.762 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:21:06.762 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:07.020 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:07.020 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:07.020 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:07.020 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:07.020 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:07.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
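Besides the SPDK-to-SPDK attach, every key is also exercised through the kernel initiator, as in the key2 pass just above: nvme connect is handed the in-band DHHC-1 secrets directly (--dhchap-secret for the host key, --dhchap-ctrl-secret for the controller key when one exists), the session is torn down with nvme disconnect, and only then is the host removed from the subsystem. A minimal sketch of that leg using the key2/ckey2 secrets from this run:

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b \
      --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 \
      --dhchap-secret 'DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==:' \
      --dhchap-ctrl-secret 'DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0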
ckey qpairs 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:07.954 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:08.528 01:21:08.528 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:08.528 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:08.528 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:08.528 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:08.528 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:08.528 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:08.528 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:08.820 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:08.820 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:08.820 { 01:21:08.820 "cntlid": 87, 01:21:08.820 "qid": 0, 01:21:08.820 "state": "enabled", 01:21:08.820 "thread": "nvmf_tgt_poll_group_000", 01:21:08.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:08.820 "listen_address": { 01:21:08.820 "trtype": "TCP", 01:21:08.820 "adrfam": "IPv4", 01:21:08.820 "traddr": "10.0.0.3", 01:21:08.820 "trsvcid": "4420" 01:21:08.820 }, 01:21:08.820 "peer_address": { 01:21:08.820 "trtype": "TCP", 01:21:08.820 "adrfam": "IPv4", 01:21:08.820 "traddr": "10.0.0.1", 01:21:08.820 "trsvcid": 
"45062" 01:21:08.820 }, 01:21:08.820 "auth": { 01:21:08.820 "state": "completed", 01:21:08.820 "digest": "sha384", 01:21:08.820 "dhgroup": "ffdhe6144" 01:21:08.820 } 01:21:08.820 } 01:21:08.820 ]' 01:21:08.820 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:08.820 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:21:08.820 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:08.820 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:21:08.820 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:08.820 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:08.820 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:08.820 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:09.079 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:09.079 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:09.646 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:09.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:09.646 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:09.646 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:09.646 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:09.646 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:09.646 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:21:09.646 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:09.646 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:21:09.646 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:21:09.905 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 01:21:09.905 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 01:21:09.905 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:21:09.905 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:21:09.905 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:21:09.905 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:09.905 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:09.905 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:09.905 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:09.905 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:09.905 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:09.905 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:09.905 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:10.472 01:21:10.472 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:10.472 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:10.731 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:10.731 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:10.731 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:10.731 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:10.731 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:10.731 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:10.731 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:10.732 { 01:21:10.732 "cntlid": 89, 01:21:10.732 "qid": 0, 01:21:10.732 "state": "enabled", 01:21:10.732 "thread": "nvmf_tgt_poll_group_000", 01:21:10.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:10.732 "listen_address": { 01:21:10.732 "trtype": "TCP", 01:21:10.732 "adrfam": "IPv4", 01:21:10.732 "traddr": "10.0.0.3", 01:21:10.732 "trsvcid": "4420" 01:21:10.732 }, 01:21:10.732 "peer_address": { 01:21:10.732 
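The script-line markers in the trace (@119 for the dhgroup loop, @120 for the key loop, @121 for bdev_nvme_set_options, @123 for connect_authenticate) give away the shape of this sha384 block: an outer loop over DH groups, an inner loop over key indices, and one set_options plus connect_authenticate pair per combination, with the nvme-cli connect/disconnect and host removal as teardown before the next pass. Roughly, as reconstructed from the trace (the dhgroups and keys arrays are defined earlier in auth.sh):

  for dhgroup in "${dhgroups[@]}"; do      # ffdhe4096, ffdhe6144, ffdhe8192, ... in this section
      for keyid in "${!keys[@]}"; do       # 0..3
          hostrpc bdev_nvme_set_options \
              --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done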
"trtype": "TCP", 01:21:10.732 "adrfam": "IPv4", 01:21:10.732 "traddr": "10.0.0.1", 01:21:10.732 "trsvcid": "42310" 01:21:10.732 }, 01:21:10.732 "auth": { 01:21:10.732 "state": "completed", 01:21:10.732 "digest": "sha384", 01:21:10.732 "dhgroup": "ffdhe8192" 01:21:10.732 } 01:21:10.732 } 01:21:10.732 ]' 01:21:10.990 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:10.990 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:21:10.990 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:10.990 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:21:10.990 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:10.990 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:10.990 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:10.990 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:11.248 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:21:11.248 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:21:11.815 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:11.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:11.816 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:11.816 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:11.816 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:11.816 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:11.816 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:11.816 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:21:11.816 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:21:12.074 05:15:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 01:21:12.074 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:12.074 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:21:12.074 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:21:12.074 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:21:12.074 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:12.074 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:12.074 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:12.074 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:12.074 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:12.074 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:12.074 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:12.074 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:12.641 01:21:12.641 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:12.641 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:12.641 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:12.900 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:12.900 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:12.900 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:12.900 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:12.900 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:12.900 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:12.900 { 01:21:12.900 "cntlid": 91, 01:21:12.900 "qid": 0, 01:21:12.900 "state": "enabled", 01:21:12.900 "thread": "nvmf_tgt_poll_group_000", 01:21:12.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 
01:21:12.900 "listen_address": { 01:21:12.900 "trtype": "TCP", 01:21:12.900 "adrfam": "IPv4", 01:21:12.900 "traddr": "10.0.0.3", 01:21:12.900 "trsvcid": "4420" 01:21:12.900 }, 01:21:12.900 "peer_address": { 01:21:12.900 "trtype": "TCP", 01:21:12.900 "adrfam": "IPv4", 01:21:12.900 "traddr": "10.0.0.1", 01:21:12.900 "trsvcid": "42340" 01:21:12.900 }, 01:21:12.900 "auth": { 01:21:12.900 "state": "completed", 01:21:12.900 "digest": "sha384", 01:21:12.900 "dhgroup": "ffdhe8192" 01:21:12.900 } 01:21:12.900 } 01:21:12.900 ]' 01:21:12.900 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:12.900 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:21:12.900 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:12.900 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:21:12.900 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:12.900 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:12.900 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:12.900 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:13.158 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:21:13.158 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:21:13.725 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:13.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:13.725 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:13.725 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:13.725 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:13.725 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:13.725 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:13.725 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:21:13.725 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:21:13.983 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 01:21:13.983 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:13.983 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:21:13.983 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:21:13.983 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:21:13.983 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:13.983 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:13.983 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:13.983 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:13.983 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:13.983 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:13.983 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:13.983 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:14.549 01:21:14.549 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:14.549 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:14.549 05:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:14.807 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:14.807 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:14.807 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:14.807 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:14.807 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:14.807 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:14.807 { 01:21:14.807 "cntlid": 93, 01:21:14.807 "qid": 0, 01:21:14.807 "state": "enabled", 01:21:14.807 "thread": 
"nvmf_tgt_poll_group_000", 01:21:14.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:14.807 "listen_address": { 01:21:14.807 "trtype": "TCP", 01:21:14.807 "adrfam": "IPv4", 01:21:14.807 "traddr": "10.0.0.3", 01:21:14.807 "trsvcid": "4420" 01:21:14.807 }, 01:21:14.807 "peer_address": { 01:21:14.807 "trtype": "TCP", 01:21:14.807 "adrfam": "IPv4", 01:21:14.807 "traddr": "10.0.0.1", 01:21:14.807 "trsvcid": "42372" 01:21:14.807 }, 01:21:14.807 "auth": { 01:21:14.807 "state": "completed", 01:21:14.807 "digest": "sha384", 01:21:14.807 "dhgroup": "ffdhe8192" 01:21:14.807 } 01:21:14.807 } 01:21:14.807 ]' 01:21:14.807 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:14.807 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:21:14.807 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:14.807 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:21:14.807 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:15.067 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:15.067 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:15.067 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:15.067 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:15.067 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:16.001 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:16.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:21:16.002 05:15:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:16.002 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:16.567 01:21:16.567 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:16.567 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:16.567 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:16.826 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:16.826 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:16.826 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:16.826 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:16.826 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:16.826 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:16.826 { 01:21:16.826 "cntlid": 95, 01:21:16.826 "qid": 0, 01:21:16.826 "state": "enabled", 01:21:16.826 
"thread": "nvmf_tgt_poll_group_000", 01:21:16.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:16.826 "listen_address": { 01:21:16.826 "trtype": "TCP", 01:21:16.826 "adrfam": "IPv4", 01:21:16.826 "traddr": "10.0.0.3", 01:21:16.826 "trsvcid": "4420" 01:21:16.826 }, 01:21:16.826 "peer_address": { 01:21:16.826 "trtype": "TCP", 01:21:16.826 "adrfam": "IPv4", 01:21:16.826 "traddr": "10.0.0.1", 01:21:16.826 "trsvcid": "42392" 01:21:16.826 }, 01:21:16.826 "auth": { 01:21:16.826 "state": "completed", 01:21:16.826 "digest": "sha384", 01:21:16.826 "dhgroup": "ffdhe8192" 01:21:16.826 } 01:21:16.826 } 01:21:16.826 ]' 01:21:16.826 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:16.826 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:21:16.826 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:16.826 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:21:16.826 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:16.826 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:16.826 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:16.826 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:17.393 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:17.393 05:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:17.960 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:17.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:17.960 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:17.960 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:17.960 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:17.960 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:17.960 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 01:21:17.960 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:21:17.960 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:17.960 05:16:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:21:17.960 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:21:18.217 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 01:21:18.217 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:18.218 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:18.218 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:21:18.218 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:21:18.218 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:18.218 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:18.218 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:18.218 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:18.218 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:18.218 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:18.218 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:18.218 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:18.484 01:21:18.484 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:18.484 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:18.484 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:18.484 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:18.484 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:18.484 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:18.484 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:18.484 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:18.484 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:18.484 { 01:21:18.484 "cntlid": 97, 01:21:18.484 "qid": 0, 01:21:18.484 "state": "enabled", 01:21:18.484 "thread": "nvmf_tgt_poll_group_000", 01:21:18.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:18.484 "listen_address": { 01:21:18.484 "trtype": "TCP", 01:21:18.484 "adrfam": "IPv4", 01:21:18.484 "traddr": "10.0.0.3", 01:21:18.484 "trsvcid": "4420" 01:21:18.484 }, 01:21:18.484 "peer_address": { 01:21:18.484 "trtype": "TCP", 01:21:18.484 "adrfam": "IPv4", 01:21:18.484 "traddr": "10.0.0.1", 01:21:18.484 "trsvcid": "42428" 01:21:18.484 }, 01:21:18.484 "auth": { 01:21:18.485 "state": "completed", 01:21:18.485 "digest": "sha512", 01:21:18.485 "dhgroup": "null" 01:21:18.485 } 01:21:18.485 } 01:21:18.485 ]' 01:21:18.744 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:18.744 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:18.744 05:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:18.744 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:21:18.744 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:18.744 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:18.744 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:18.744 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:19.003 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:21:19.003 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:21:19.571 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:19.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:19.571 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:19.571 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:19.571 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:19.571 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 01:21:19.571 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:19.571 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:21:19.571 05:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:21:19.830 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 01:21:19.830 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:19.830 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:19.830 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:21:19.830 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:21:19.830 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:19.830 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:19.830 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:19.830 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:19.830 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:19.830 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:19.830 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:19.830 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:20.089 01:21:20.089 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:20.089 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:20.089 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:20.348 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:20.348 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:20.348 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:20.348 05:16:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:20.348 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:20.348 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:20.348 { 01:21:20.348 "cntlid": 99, 01:21:20.348 "qid": 0, 01:21:20.348 "state": "enabled", 01:21:20.348 "thread": "nvmf_tgt_poll_group_000", 01:21:20.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:20.348 "listen_address": { 01:21:20.348 "trtype": "TCP", 01:21:20.348 "adrfam": "IPv4", 01:21:20.348 "traddr": "10.0.0.3", 01:21:20.348 "trsvcid": "4420" 01:21:20.348 }, 01:21:20.349 "peer_address": { 01:21:20.349 "trtype": "TCP", 01:21:20.349 "adrfam": "IPv4", 01:21:20.349 "traddr": "10.0.0.1", 01:21:20.349 "trsvcid": "35936" 01:21:20.349 }, 01:21:20.349 "auth": { 01:21:20.349 "state": "completed", 01:21:20.349 "digest": "sha512", 01:21:20.349 "dhgroup": "null" 01:21:20.349 } 01:21:20.349 } 01:21:20.349 ]' 01:21:20.349 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:20.349 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:20.349 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:20.349 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:21:20.349 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:20.349 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:20.349 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:20.349 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:20.607 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:21:20.607 05:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:21:21.175 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:21.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:21.175 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:21.175 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:21.175 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:21.175 05:16:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:21.175 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:21.176 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:21:21.176 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:21:21.434 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 01:21:21.435 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:21.435 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:21.435 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:21:21.435 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:21:21.435 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:21.435 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:21.435 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:21.435 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:21.435 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:21.435 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:21.435 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:21.435 05:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:21.693 01:21:21.693 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:21.693 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:21.693 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:21.952 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:21.952 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:21.952 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:21:21.952 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:21.952 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:21.952 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:21.952 { 01:21:21.952 "cntlid": 101, 01:21:21.952 "qid": 0, 01:21:21.952 "state": "enabled", 01:21:21.952 "thread": "nvmf_tgt_poll_group_000", 01:21:21.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:21.952 "listen_address": { 01:21:21.952 "trtype": "TCP", 01:21:21.952 "adrfam": "IPv4", 01:21:21.952 "traddr": "10.0.0.3", 01:21:21.952 "trsvcid": "4420" 01:21:21.952 }, 01:21:21.952 "peer_address": { 01:21:21.952 "trtype": "TCP", 01:21:21.952 "adrfam": "IPv4", 01:21:21.952 "traddr": "10.0.0.1", 01:21:21.952 "trsvcid": "35958" 01:21:21.952 }, 01:21:21.952 "auth": { 01:21:21.952 "state": "completed", 01:21:21.952 "digest": "sha512", 01:21:21.952 "dhgroup": "null" 01:21:21.952 } 01:21:21.952 } 01:21:21.952 ]' 01:21:21.952 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:21.952 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:21.952 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:21.952 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:21:21.952 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:21.952 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:21.952 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:21.952 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:22.213 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:22.213 05:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:22.780 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:22.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:22.780 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:22.780 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:22.780 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 01:21:22.780 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:22.780 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:22.780 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:21:22.780 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:21:23.039 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 01:21:23.039 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:23.039 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:23.039 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:21:23.039 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:21:23.039 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:23.039 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:21:23.039 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:23.039 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:23.039 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:23.039 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:21:23.039 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:23.039 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:23.298 01:21:23.298 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:23.298 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:23.298 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:23.557 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:23.557 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:23.557 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:21:23.557 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:23.557 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:23.557 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:23.557 { 01:21:23.557 "cntlid": 103, 01:21:23.557 "qid": 0, 01:21:23.557 "state": "enabled", 01:21:23.557 "thread": "nvmf_tgt_poll_group_000", 01:21:23.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:23.557 "listen_address": { 01:21:23.557 "trtype": "TCP", 01:21:23.557 "adrfam": "IPv4", 01:21:23.557 "traddr": "10.0.0.3", 01:21:23.557 "trsvcid": "4420" 01:21:23.557 }, 01:21:23.557 "peer_address": { 01:21:23.557 "trtype": "TCP", 01:21:23.557 "adrfam": "IPv4", 01:21:23.557 "traddr": "10.0.0.1", 01:21:23.557 "trsvcid": "35996" 01:21:23.557 }, 01:21:23.557 "auth": { 01:21:23.557 "state": "completed", 01:21:23.557 "digest": "sha512", 01:21:23.557 "dhgroup": "null" 01:21:23.557 } 01:21:23.557 } 01:21:23.557 ]' 01:21:23.557 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:23.557 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:23.557 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:23.816 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:21:23.816 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:23.816 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:23.816 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:23.816 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:24.075 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:24.075 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:24.642 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:24.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:24.642 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:24.642 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:24.642 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:24.642 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 01:21:24.642 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:21:24.642 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:24.642 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:21:24.642 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:21:24.901 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 01:21:24.901 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:24.901 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:24.901 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:21:24.901 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:21:24.901 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:24.901 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:24.901 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:24.901 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:24.901 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:24.901 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:24.901 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:24.901 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:25.160 01:21:25.160 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:25.160 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:25.160 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:25.419 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:25.419 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:25.419 
05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:25.419 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:25.419 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:25.419 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:25.419 { 01:21:25.419 "cntlid": 105, 01:21:25.419 "qid": 0, 01:21:25.419 "state": "enabled", 01:21:25.419 "thread": "nvmf_tgt_poll_group_000", 01:21:25.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:25.419 "listen_address": { 01:21:25.419 "trtype": "TCP", 01:21:25.419 "adrfam": "IPv4", 01:21:25.419 "traddr": "10.0.0.3", 01:21:25.419 "trsvcid": "4420" 01:21:25.419 }, 01:21:25.419 "peer_address": { 01:21:25.419 "trtype": "TCP", 01:21:25.419 "adrfam": "IPv4", 01:21:25.419 "traddr": "10.0.0.1", 01:21:25.419 "trsvcid": "36022" 01:21:25.419 }, 01:21:25.419 "auth": { 01:21:25.419 "state": "completed", 01:21:25.419 "digest": "sha512", 01:21:25.419 "dhgroup": "ffdhe2048" 01:21:25.419 } 01:21:25.419 } 01:21:25.419 ]' 01:21:25.419 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:25.419 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:25.419 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:25.419 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:21:25.419 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:25.680 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:25.680 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:25.680 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:25.938 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:21:25.938 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:21:26.506 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:26.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:26.506 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:26.506 05:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:26.506 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:26.506 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:26.506 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:26.506 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:21:26.506 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:21:26.506 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 01:21:26.506 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:26.506 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:26.506 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:21:26.506 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:21:26.506 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:26.506 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:26.506 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:26.506 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:26.764 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:26.764 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:26.764 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:26.764 05:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:27.022 01:21:27.022 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:27.022 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:27.023 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:27.280 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 01:21:27.280 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:27.280 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:27.280 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:27.280 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:27.280 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:27.281 { 01:21:27.281 "cntlid": 107, 01:21:27.281 "qid": 0, 01:21:27.281 "state": "enabled", 01:21:27.281 "thread": "nvmf_tgt_poll_group_000", 01:21:27.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:27.281 "listen_address": { 01:21:27.281 "trtype": "TCP", 01:21:27.281 "adrfam": "IPv4", 01:21:27.281 "traddr": "10.0.0.3", 01:21:27.281 "trsvcid": "4420" 01:21:27.281 }, 01:21:27.281 "peer_address": { 01:21:27.281 "trtype": "TCP", 01:21:27.281 "adrfam": "IPv4", 01:21:27.281 "traddr": "10.0.0.1", 01:21:27.281 "trsvcid": "36052" 01:21:27.281 }, 01:21:27.281 "auth": { 01:21:27.281 "state": "completed", 01:21:27.281 "digest": "sha512", 01:21:27.281 "dhgroup": "ffdhe2048" 01:21:27.281 } 01:21:27.281 } 01:21:27.281 ]' 01:21:27.281 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:27.281 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:27.281 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:27.281 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:21:27.281 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:27.281 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:27.281 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:27.281 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:27.539 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:21:27.539 05:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:21:28.106 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:28.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:28.106 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:28.106 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:28.106 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:28.106 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:28.106 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:28.106 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:21:28.106 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:21:28.365 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 01:21:28.365 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:28.365 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:28.365 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:21:28.365 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:21:28.365 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:28.365 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:28.365 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:28.365 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:28.365 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:28.365 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:28.365 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:28.365 05:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:28.623 01:21:28.623 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:28.623 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:28.623 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 01:21:28.883 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:28.883 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:28.883 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:28.883 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:28.883 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:28.883 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:28.883 { 01:21:28.883 "cntlid": 109, 01:21:28.883 "qid": 0, 01:21:28.883 "state": "enabled", 01:21:28.883 "thread": "nvmf_tgt_poll_group_000", 01:21:28.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:28.883 "listen_address": { 01:21:28.883 "trtype": "TCP", 01:21:28.883 "adrfam": "IPv4", 01:21:28.883 "traddr": "10.0.0.3", 01:21:28.883 "trsvcid": "4420" 01:21:28.883 }, 01:21:28.883 "peer_address": { 01:21:28.883 "trtype": "TCP", 01:21:28.883 "adrfam": "IPv4", 01:21:28.883 "traddr": "10.0.0.1", 01:21:28.883 "trsvcid": "36074" 01:21:28.883 }, 01:21:28.883 "auth": { 01:21:28.883 "state": "completed", 01:21:28.883 "digest": "sha512", 01:21:28.883 "dhgroup": "ffdhe2048" 01:21:28.883 } 01:21:28.883 } 01:21:28.883 ]' 01:21:28.883 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:28.883 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:28.883 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:28.883 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:21:28.883 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:29.142 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:29.142 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:29.142 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:29.401 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:29.401 05:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:29.968 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:29.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:29.968 05:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:29.968 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:29.968 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:29.968 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:29.968 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:29.968 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:21:29.968 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:21:30.226 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 01:21:30.226 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:30.226 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:30.226 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:21:30.226 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:21:30.226 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:30.226 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:21:30.226 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:30.226 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:30.226 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:30.226 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:21:30.226 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:30.226 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:30.483 01:21:30.483 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:30.483 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:30.484 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 01:21:30.742 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:30.742 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:30.742 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:30.742 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:30.742 05:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:30.742 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:30.742 { 01:21:30.742 "cntlid": 111, 01:21:30.742 "qid": 0, 01:21:30.742 "state": "enabled", 01:21:30.742 "thread": "nvmf_tgt_poll_group_000", 01:21:30.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:30.742 "listen_address": { 01:21:30.742 "trtype": "TCP", 01:21:30.742 "adrfam": "IPv4", 01:21:30.742 "traddr": "10.0.0.3", 01:21:30.742 "trsvcid": "4420" 01:21:30.742 }, 01:21:30.742 "peer_address": { 01:21:30.742 "trtype": "TCP", 01:21:30.742 "adrfam": "IPv4", 01:21:30.742 "traddr": "10.0.0.1", 01:21:30.742 "trsvcid": "40846" 01:21:30.742 }, 01:21:30.742 "auth": { 01:21:30.742 "state": "completed", 01:21:30.742 "digest": "sha512", 01:21:30.742 "dhgroup": "ffdhe2048" 01:21:30.742 } 01:21:30.742 } 01:21:30.742 ]' 01:21:30.742 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:30.742 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:30.742 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:30.742 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:21:30.742 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:30.742 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:30.742 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:30.742 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:31.001 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:31.001 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:31.571 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:31.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:31.571 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:31.571 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:31.571 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:31.571 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:31.571 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:21:31.571 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:31.571 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:21:31.571 05:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:21:31.831 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 01:21:31.831 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:31.831 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:31.831 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:21:31.831 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:21:31.831 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:31.831 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:31.831 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:31.831 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:31.831 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:31.831 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:31.831 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:31.831 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:32.090 01:21:32.090 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:32.090 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
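For reference, each authentication pass in this log repeats the same host/target cycle, varying only the digest, DH group and key index. Below is a consolidated sketch of one pass (sha512 + ffdhe2048 with key1/ckey1), reconstructed solely from the commands visible in this excerpt; the target-side RPC socket used by rpc_cmd (shown here as the placeholder $TGT_SOCK) and the prior registration of the key1/ckey1 DH-HMAC-CHAP keys are assumptions, since neither appears in this part of the log.

  # Host side (bdev_nvme path): restrict negotiation to the digest/dhgroup under test.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # Target side: allow the host NQN to authenticate with key1 (ckey1 for bidirectional auth).
  # $TGT_SOCK is a placeholder; this excerpt does not show the target RPC socket path.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$TGT_SOCK" nvmf_subsystem_add_host \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach a controller with the same keys and confirm it shows up.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers \
      | jq -r '.[].name'

  # Target side: confirm the queue pair completed DH-HMAC-CHAP with the expected parameters.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$TGT_SOCK" nvmf_subsystem_get_qpairs \
      nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state, .[0].auth.digest, .[0].auth.dhgroup'

  # Tear down the bdev_nvme path, then repeat the check through the kernel initiator.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b \
      --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 \
      --dhchap-secret "$KEY1" --dhchap-ctrl-secret "$CKEY1"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$TGT_SOCK" nvmf_subsystem_remove_host \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b

$KEY1 and $CKEY1 stand in for the DHHC-1 secrets printed verbatim in the log, and the jq query mirrors the per-pass assertions the script makes ([[ sha512 == sha512 ]], [[ ffdhe2048 == ffdhe2048 ]], [[ completed == completed ]]) before detaching and moving to the next key.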
01:21:32.090 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:32.350 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:32.350 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:32.350 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:32.350 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:32.350 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:32.350 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:32.350 { 01:21:32.350 "cntlid": 113, 01:21:32.350 "qid": 0, 01:21:32.350 "state": "enabled", 01:21:32.350 "thread": "nvmf_tgt_poll_group_000", 01:21:32.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:32.350 "listen_address": { 01:21:32.350 "trtype": "TCP", 01:21:32.350 "adrfam": "IPv4", 01:21:32.350 "traddr": "10.0.0.3", 01:21:32.350 "trsvcid": "4420" 01:21:32.350 }, 01:21:32.350 "peer_address": { 01:21:32.350 "trtype": "TCP", 01:21:32.350 "adrfam": "IPv4", 01:21:32.350 "traddr": "10.0.0.1", 01:21:32.350 "trsvcid": "40880" 01:21:32.350 }, 01:21:32.350 "auth": { 01:21:32.350 "state": "completed", 01:21:32.350 "digest": "sha512", 01:21:32.350 "dhgroup": "ffdhe3072" 01:21:32.350 } 01:21:32.350 } 01:21:32.350 ]' 01:21:32.350 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:32.350 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:32.350 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:32.350 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:21:32.610 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:32.610 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:32.610 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:32.610 05:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:32.610 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:21:32.610 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret 
DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:21:33.180 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:33.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:33.180 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:33.180 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:33.180 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:33.180 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:33.180 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:33.180 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:21:33.180 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:21:33.439 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 01:21:33.439 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:33.439 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:33.439 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:21:33.439 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:21:33.439 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:33.439 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:33.439 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:33.439 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:33.439 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:33.439 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:33.439 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:33.439 05:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:34.009 01:21:34.009 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:34.009 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:34.009 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:34.009 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:34.009 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:34.009 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:34.009 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:34.009 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:34.009 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:34.009 { 01:21:34.009 "cntlid": 115, 01:21:34.009 "qid": 0, 01:21:34.009 "state": "enabled", 01:21:34.009 "thread": "nvmf_tgt_poll_group_000", 01:21:34.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:34.009 "listen_address": { 01:21:34.009 "trtype": "TCP", 01:21:34.009 "adrfam": "IPv4", 01:21:34.009 "traddr": "10.0.0.3", 01:21:34.009 "trsvcid": "4420" 01:21:34.009 }, 01:21:34.009 "peer_address": { 01:21:34.009 "trtype": "TCP", 01:21:34.009 "adrfam": "IPv4", 01:21:34.009 "traddr": "10.0.0.1", 01:21:34.009 "trsvcid": "40906" 01:21:34.009 }, 01:21:34.009 "auth": { 01:21:34.009 "state": "completed", 01:21:34.009 "digest": "sha512", 01:21:34.009 "dhgroup": "ffdhe3072" 01:21:34.009 } 01:21:34.009 } 01:21:34.009 ]' 01:21:34.009 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:34.009 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:34.009 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:34.269 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:21:34.269 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:34.269 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:34.269 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:34.269 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:34.529 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:21:34.529 05:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 
0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:35.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:35.099 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:35.359 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:35.359 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:35.359 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:35.359 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:35.619 01:21:35.619 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:35.619 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:35.619 05:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:35.619 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:35.878 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:35.878 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:35.878 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:35.878 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:35.878 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:35.878 { 01:21:35.878 "cntlid": 117, 01:21:35.878 "qid": 0, 01:21:35.878 "state": "enabled", 01:21:35.878 "thread": "nvmf_tgt_poll_group_000", 01:21:35.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:35.878 "listen_address": { 01:21:35.878 "trtype": "TCP", 01:21:35.878 "adrfam": "IPv4", 01:21:35.878 "traddr": "10.0.0.3", 01:21:35.878 "trsvcid": "4420" 01:21:35.878 }, 01:21:35.878 "peer_address": { 01:21:35.878 "trtype": "TCP", 01:21:35.878 "adrfam": "IPv4", 01:21:35.878 "traddr": "10.0.0.1", 01:21:35.878 "trsvcid": "40934" 01:21:35.878 }, 01:21:35.878 "auth": { 01:21:35.878 "state": "completed", 01:21:35.878 "digest": "sha512", 01:21:35.878 "dhgroup": "ffdhe3072" 01:21:35.878 } 01:21:35.878 } 01:21:35.878 ]' 01:21:35.878 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:35.879 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:35.879 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:35.879 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:21:35.879 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:35.879 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:35.879 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:35.879 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:36.138 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:36.138 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:36.707 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:36.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:36.707 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:36.707 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:36.707 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:36.707 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:36.707 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:36.707 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:21:36.707 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:21:36.966 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 01:21:36.966 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:36.966 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:36.966 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:21:36.966 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:21:36.966 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:36.966 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:21:36.966 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:36.966 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:36.966 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:36.966 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:21:36.966 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:36.966 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:37.226 01:21:37.226 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:37.226 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:37.226 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:37.485 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:37.485 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:37.485 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:37.485 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:37.485 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:37.485 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:37.485 { 01:21:37.485 "cntlid": 119, 01:21:37.485 "qid": 0, 01:21:37.485 "state": "enabled", 01:21:37.485 "thread": "nvmf_tgt_poll_group_000", 01:21:37.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:37.485 "listen_address": { 01:21:37.485 "trtype": "TCP", 01:21:37.485 "adrfam": "IPv4", 01:21:37.485 "traddr": "10.0.0.3", 01:21:37.485 "trsvcid": "4420" 01:21:37.485 }, 01:21:37.485 "peer_address": { 01:21:37.485 "trtype": "TCP", 01:21:37.485 "adrfam": "IPv4", 01:21:37.485 "traddr": "10.0.0.1", 01:21:37.485 "trsvcid": "40972" 01:21:37.485 }, 01:21:37.485 "auth": { 01:21:37.485 "state": "completed", 01:21:37.485 "digest": "sha512", 01:21:37.485 "dhgroup": "ffdhe3072" 01:21:37.485 } 01:21:37.485 } 01:21:37.485 ]' 01:21:37.485 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:37.485 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:37.485 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:37.485 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:21:37.485 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:37.485 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:37.485 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:37.485 05:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:37.744 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:37.744 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:38.311 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:38.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:38.311 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:38.311 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:38.311 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:38.311 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:38.311 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:21:38.311 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:38.311 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:21:38.311 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:21:38.571 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 01:21:38.571 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:38.571 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:38.571 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:21:38.571 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:21:38.571 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:38.571 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:38.571 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:38.571 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:38.571 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:38.571 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:38.571 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:38.571 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:39.140 01:21:39.140 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:39.140 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:39.140 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:39.140 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:39.140 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:39.140 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:39.140 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:39.400 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:39.400 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:39.400 { 01:21:39.400 "cntlid": 121, 01:21:39.400 "qid": 0, 01:21:39.400 "state": "enabled", 01:21:39.400 "thread": "nvmf_tgt_poll_group_000", 01:21:39.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:39.400 "listen_address": { 01:21:39.400 "trtype": "TCP", 01:21:39.400 "adrfam": "IPv4", 01:21:39.400 "traddr": "10.0.0.3", 01:21:39.400 "trsvcid": "4420" 01:21:39.400 }, 01:21:39.400 "peer_address": { 01:21:39.400 "trtype": "TCP", 01:21:39.400 "adrfam": "IPv4", 01:21:39.400 "traddr": "10.0.0.1", 01:21:39.400 "trsvcid": "41014" 01:21:39.400 }, 01:21:39.400 "auth": { 01:21:39.400 "state": "completed", 01:21:39.400 "digest": "sha512", 01:21:39.400 "dhgroup": "ffdhe4096" 01:21:39.400 } 01:21:39.400 } 01:21:39.400 ]' 01:21:39.400 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:39.400 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:39.400 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:39.400 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:21:39.400 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:39.400 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:39.400 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:39.400 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:39.659 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret 
DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:21:39.659 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:21:40.228 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:40.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:40.228 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:40.229 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:40.229 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:40.229 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:40.229 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:40.229 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:21:40.229 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:21:40.488 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 01:21:40.488 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:40.488 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:40.489 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:21:40.489 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:21:40.489 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:40.489 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:40.489 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:40.489 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:40.489 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:40.489 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:40.489 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:40.489 05:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:40.748 01:21:40.748 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:40.748 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:40.748 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:41.008 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:41.008 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:41.008 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:41.008 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:41.008 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:41.008 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:41.008 { 01:21:41.008 "cntlid": 123, 01:21:41.008 "qid": 0, 01:21:41.008 "state": "enabled", 01:21:41.008 "thread": "nvmf_tgt_poll_group_000", 01:21:41.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:41.008 "listen_address": { 01:21:41.008 "trtype": "TCP", 01:21:41.008 "adrfam": "IPv4", 01:21:41.008 "traddr": "10.0.0.3", 01:21:41.008 "trsvcid": "4420" 01:21:41.008 }, 01:21:41.008 "peer_address": { 01:21:41.008 "trtype": "TCP", 01:21:41.008 "adrfam": "IPv4", 01:21:41.009 "traddr": "10.0.0.1", 01:21:41.009 "trsvcid": "49354" 01:21:41.009 }, 01:21:41.009 "auth": { 01:21:41.009 "state": "completed", 01:21:41.009 "digest": "sha512", 01:21:41.009 "dhgroup": "ffdhe4096" 01:21:41.009 } 01:21:41.009 } 01:21:41.009 ]' 01:21:41.009 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:41.009 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:41.009 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:41.009 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:21:41.009 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:41.268 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:41.268 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:41.268 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:41.528 05:16:23 
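The sequence above is one pass of the per-key check in target/auth.sh. A sketch of that cycle, reconstructed only from the commands visible in this log: the hostnqn variable, the short rpc.py path, and the key1/ckey1 names stand in for the literal values printed above, and the target-side RPC socket used by rpc_cmd is not shown in the log, so it is left at its default here.

hostnqn=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b

# Target side: allow this host to log in to cnode0 with the key1/ckey1 pair.
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side (the second SPDK app listening on /var/tmp/host.sock): attach a
# controller over TCP, authenticating with the same key pair.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller exists and the target saw the authentication complete.
rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | .digest, .dhgroup, .state'   # expect: sha512, ffdhe4096, completed

# Tear down before the next key is exercised.
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0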
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:21:41.528 05:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:21:42.097 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:42.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:42.097 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:42.097 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:42.097 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:42.097 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:42.097 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:42.097 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:21:42.097 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:21:42.097 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 01:21:42.097 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:42.097 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:42.097 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:21:42.098 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:21:42.098 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:42.098 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:42.098 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:42.098 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:42.098 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:42.098 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:42.098 05:16:24 
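Each pass also checks the kernel initiator path, as the log shows here: nvme-cli connects with the generated secrets in their DHHC-1 on-wire form, disconnects, and the host entry is removed from the subsystem before the next key is configured. A sketch under the same assumptions, with the DHHC-1 strings (printed in full above) held in shell variables rather than repeated:

uuid=0567fff2-ddaf-4a4f-877d-a2600d7e662b
# dhchap_key / dhchap_ctrl_key hold the DHHC-1:..: secrets shown in the log.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$uuid" -l 0 \
    --dhchap-secret "$dhchap_key" --dhchap-ctrl-secret "$dhchap_ctrl_key"

nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"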
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:42.098 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:42.666 01:21:42.666 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:42.667 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:42.667 05:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:42.667 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:42.667 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:42.667 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:42.667 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:42.667 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:42.667 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:42.667 { 01:21:42.667 "cntlid": 125, 01:21:42.667 "qid": 0, 01:21:42.667 "state": "enabled", 01:21:42.667 "thread": "nvmf_tgt_poll_group_000", 01:21:42.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:42.667 "listen_address": { 01:21:42.667 "trtype": "TCP", 01:21:42.667 "adrfam": "IPv4", 01:21:42.667 "traddr": "10.0.0.3", 01:21:42.667 "trsvcid": "4420" 01:21:42.667 }, 01:21:42.667 "peer_address": { 01:21:42.667 "trtype": "TCP", 01:21:42.667 "adrfam": "IPv4", 01:21:42.667 "traddr": "10.0.0.1", 01:21:42.667 "trsvcid": "49384" 01:21:42.667 }, 01:21:42.667 "auth": { 01:21:42.667 "state": "completed", 01:21:42.667 "digest": "sha512", 01:21:42.667 "dhgroup": "ffdhe4096" 01:21:42.667 } 01:21:42.667 } 01:21:42.667 ]' 01:21:42.667 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:42.926 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:42.926 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:42.926 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:21:42.926 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:42.926 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:42.926 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:42.926 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:43.185 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:43.185 05:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:43.796 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:43.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:43.796 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:43.796 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:43.796 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:43.796 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:43.796 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:43.796 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:21:43.796 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:21:44.055 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 01:21:44.055 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:44.055 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:44.055 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:21:44.055 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:21:44.055 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:44.055 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:21:44.055 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.055 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:44.055 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.055 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 01:21:44.055 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:44.055 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:44.315 01:21:44.315 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:44.315 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:44.315 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:44.574 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:44.574 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:44.574 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:44.574 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:44.574 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:44.574 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:44.574 { 01:21:44.574 "cntlid": 127, 01:21:44.574 "qid": 0, 01:21:44.574 "state": "enabled", 01:21:44.574 "thread": "nvmf_tgt_poll_group_000", 01:21:44.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:44.574 "listen_address": { 01:21:44.574 "trtype": "TCP", 01:21:44.574 "adrfam": "IPv4", 01:21:44.574 "traddr": "10.0.0.3", 01:21:44.574 "trsvcid": "4420" 01:21:44.574 }, 01:21:44.574 "peer_address": { 01:21:44.574 "trtype": "TCP", 01:21:44.574 "adrfam": "IPv4", 01:21:44.574 "traddr": "10.0.0.1", 01:21:44.574 "trsvcid": "49408" 01:21:44.574 }, 01:21:44.574 "auth": { 01:21:44.574 "state": "completed", 01:21:44.574 "digest": "sha512", 01:21:44.574 "dhgroup": "ffdhe4096" 01:21:44.574 } 01:21:44.574 } 01:21:44.574 ]' 01:21:44.574 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:44.574 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:44.574 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:44.574 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:21:44.574 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:44.574 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:44.574 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:44.574 05:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:44.833 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:44.833 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:45.403 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:45.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:45.403 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:45.403 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:45.662 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:45.662 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:45.662 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:21:45.662 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:45.662 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:21:45.662 05:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:21:45.662 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 01:21:45.662 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:45.662 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:45.662 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:21:45.662 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:21:45.662 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:45.662 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:45.662 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:45.662 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:45.940 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:45.940 05:16:28 
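Here the ffdhe4096 pass ends and the log switches to ffdhe6144. Read together with the script line numbers in the xtrace (auth.sh@119-123), the structure of this phase is a nested loop: for each DH group the host-side NVMe driver is restricted to a single digest/dhgroup combination, then every key is run through the connect_authenticate cycle sketched earlier. Roughly:

for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do      # auth.sh@119; sha512 passes shown in this log
    for keyid in "${!keys[@]}"; do                    # auth.sh@120; keys 0-3 here
        rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"   # auth.sh@121
        connect_authenticate sha512 "$dhgroup" "$keyid"            # auth.sh@123, per-key cycle above
    done
done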
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:45.940 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:45.940 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:46.199 01:21:46.199 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:46.199 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:46.200 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:46.459 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:46.459 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:46.459 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:46.459 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:46.459 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:46.459 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:46.459 { 01:21:46.459 "cntlid": 129, 01:21:46.459 "qid": 0, 01:21:46.459 "state": "enabled", 01:21:46.459 "thread": "nvmf_tgt_poll_group_000", 01:21:46.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:46.459 "listen_address": { 01:21:46.459 "trtype": "TCP", 01:21:46.459 "adrfam": "IPv4", 01:21:46.459 "traddr": "10.0.0.3", 01:21:46.459 "trsvcid": "4420" 01:21:46.459 }, 01:21:46.459 "peer_address": { 01:21:46.459 "trtype": "TCP", 01:21:46.459 "adrfam": "IPv4", 01:21:46.459 "traddr": "10.0.0.1", 01:21:46.459 "trsvcid": "49434" 01:21:46.459 }, 01:21:46.459 "auth": { 01:21:46.459 "state": "completed", 01:21:46.459 "digest": "sha512", 01:21:46.459 "dhgroup": "ffdhe6144" 01:21:46.459 } 01:21:46.459 } 01:21:46.459 ]' 01:21:46.459 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:46.459 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:46.459 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:46.459 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:21:46.459 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:46.459 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:46.459 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:46.459 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:46.718 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:21:46.718 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:21:47.285 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:47.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:47.285 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:47.285 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:47.285 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:47.285 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:47.285 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:47.285 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:21:47.285 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:21:47.545 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 01:21:47.545 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:47.545 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:47.545 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:21:47.545 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:21:47.545 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:47.545 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:47.545 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:47.545 05:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:47.545 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:47.545 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:47.545 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:47.546 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:48.114 01:21:48.114 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:48.114 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:48.114 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:48.373 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:48.373 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:48.373 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:48.373 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:48.373 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:48.373 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:48.373 { 01:21:48.374 "cntlid": 131, 01:21:48.374 "qid": 0, 01:21:48.374 "state": "enabled", 01:21:48.374 "thread": "nvmf_tgt_poll_group_000", 01:21:48.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:48.374 "listen_address": { 01:21:48.374 "trtype": "TCP", 01:21:48.374 "adrfam": "IPv4", 01:21:48.374 "traddr": "10.0.0.3", 01:21:48.374 "trsvcid": "4420" 01:21:48.374 }, 01:21:48.374 "peer_address": { 01:21:48.374 "trtype": "TCP", 01:21:48.374 "adrfam": "IPv4", 01:21:48.374 "traddr": "10.0.0.1", 01:21:48.374 "trsvcid": "49450" 01:21:48.374 }, 01:21:48.374 "auth": { 01:21:48.374 "state": "completed", 01:21:48.374 "digest": "sha512", 01:21:48.374 "dhgroup": "ffdhe6144" 01:21:48.374 } 01:21:48.374 } 01:21:48.374 ]' 01:21:48.374 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:48.374 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:48.374 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:48.374 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:21:48.374 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 01:21:48.374 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:48.374 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:48.374 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:48.632 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:21:48.632 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:21:49.200 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:49.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:49.200 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:49.200 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:49.200 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:49.200 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:49.200 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:49.200 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:21:49.200 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:21:49.459 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 01:21:49.459 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:49.459 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:49.459 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:21:49.459 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:21:49.459 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:49.459 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:49.459 05:16:31 
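The qpair dumps above are what the per-pass assertions run against: the test pulls nvmf_subsystem_get_qpairs for cnode0 and checks the auth block of the first qpair. The checks, as they appear in this log:

qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]   # the group configured for this pass
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # DH-HMAC-CHAP finished successfully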
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:49.459 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:49.459 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:49.459 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:49.459 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:49.459 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:50.027 01:21:50.027 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:50.027 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:50.027 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:50.027 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:50.027 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:50.027 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:50.027 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:50.027 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:50.027 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:50.027 { 01:21:50.027 "cntlid": 133, 01:21:50.027 "qid": 0, 01:21:50.027 "state": "enabled", 01:21:50.027 "thread": "nvmf_tgt_poll_group_000", 01:21:50.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:50.027 "listen_address": { 01:21:50.027 "trtype": "TCP", 01:21:50.027 "adrfam": "IPv4", 01:21:50.027 "traddr": "10.0.0.3", 01:21:50.027 "trsvcid": "4420" 01:21:50.027 }, 01:21:50.027 "peer_address": { 01:21:50.027 "trtype": "TCP", 01:21:50.027 "adrfam": "IPv4", 01:21:50.027 "traddr": "10.0.0.1", 01:21:50.027 "trsvcid": "49488" 01:21:50.027 }, 01:21:50.027 "auth": { 01:21:50.027 "state": "completed", 01:21:50.027 "digest": "sha512", 01:21:50.027 "dhgroup": "ffdhe6144" 01:21:50.027 } 01:21:50.027 } 01:21:50.027 ]' 01:21:50.027 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:50.027 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:50.028 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:50.287 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 01:21:50.287 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:50.287 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:50.287 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:50.287 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:50.547 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:50.547 05:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:51.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:51.119 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:51.701 01:21:51.701 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:51.701 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:51.701 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:51.961 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:51.961 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:51.961 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:51.961 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:51.961 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:51.961 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:51.961 { 01:21:51.961 "cntlid": 135, 01:21:51.961 "qid": 0, 01:21:51.961 "state": "enabled", 01:21:51.961 "thread": "nvmf_tgt_poll_group_000", 01:21:51.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:51.961 "listen_address": { 01:21:51.961 "trtype": "TCP", 01:21:51.961 "adrfam": "IPv4", 01:21:51.961 "traddr": "10.0.0.3", 01:21:51.961 "trsvcid": "4420" 01:21:51.961 }, 01:21:51.961 "peer_address": { 01:21:51.961 "trtype": "TCP", 01:21:51.961 "adrfam": "IPv4", 01:21:51.961 "traddr": "10.0.0.1", 01:21:51.961 "trsvcid": "53904" 01:21:51.961 }, 01:21:51.961 "auth": { 01:21:51.961 "state": "completed", 01:21:51.961 "digest": "sha512", 01:21:51.961 "dhgroup": "ffdhe6144" 01:21:51.961 } 01:21:51.961 } 01:21:51.961 ]' 01:21:51.961 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:51.961 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:51.961 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:51.961 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:21:51.961 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:51.961 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:51.961 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:51.961 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:52.219 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:52.219 05:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:52.787 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:52.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:52.787 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:52.787 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:52.787 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:52.787 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:52.787 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:21:52.787 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:52.787 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:21:52.787 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:21:53.047 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 01:21:53.047 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:53.047 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:53.047 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:21:53.047 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:21:53.047 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:53.047 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:53.047 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:53.047 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:53.047 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:53.047 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:53.047 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:53.047 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:21:53.616 01:21:53.616 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:53.616 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:53.616 05:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:53.876 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:53.876 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:53.876 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:53.876 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:53.876 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:53.876 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:53.876 { 01:21:53.876 "cntlid": 137, 01:21:53.876 "qid": 0, 01:21:53.876 "state": "enabled", 01:21:53.876 "thread": "nvmf_tgt_poll_group_000", 01:21:53.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:53.876 "listen_address": { 01:21:53.876 "trtype": "TCP", 01:21:53.876 "adrfam": "IPv4", 01:21:53.876 "traddr": "10.0.0.3", 01:21:53.876 "trsvcid": "4420" 01:21:53.876 }, 01:21:53.876 "peer_address": { 01:21:53.876 "trtype": "TCP", 01:21:53.876 "adrfam": "IPv4", 01:21:53.876 "traddr": "10.0.0.1", 01:21:53.876 "trsvcid": "53926" 01:21:53.876 }, 01:21:53.876 "auth": { 01:21:53.876 "state": "completed", 01:21:53.876 "digest": "sha512", 01:21:53.876 "dhgroup": "ffdhe8192" 01:21:53.876 } 01:21:53.876 } 01:21:53.876 ]' 01:21:53.876 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:53.876 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:53.876 05:16:36 
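One detail that distinguishes the iterations in these passes: keys 0-2 are registered and attached with both --dhchap-key and --dhchap-ctrlr-key (bidirectional DH-HMAC-CHAP), while key3 is registered with --dhchap-key key3 alone and the matching nvme connect passes only --dhchap-secret, so in that iteration only the host authenticates to the target. The two variants as they appear in the log:

# Bidirectional: host key plus controller key.
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host-only (key3): no controller key, so the target is not authenticated back.
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key3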
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:53.876 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:21:53.876 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:53.876 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:53.876 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:53.876 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:54.136 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:21:54.136 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:21:54.705 05:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:54.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:54.705 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:54.705 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:54.705 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:54.705 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:54.705 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:54.705 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:21:54.705 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:21:54.965 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 01:21:54.965 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:54.965 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:54.965 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:21:54.965 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:21:54.965 05:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:54.965 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:54.965 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:54.965 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:54.965 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:54.965 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:54.965 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:54.965 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:21:55.534 01:21:55.534 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:55.534 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:55.534 05:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:55.794 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:55.794 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:55.794 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:55.794 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:55.794 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:55.794 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:55.794 { 01:21:55.794 "cntlid": 139, 01:21:55.794 "qid": 0, 01:21:55.794 "state": "enabled", 01:21:55.794 "thread": "nvmf_tgt_poll_group_000", 01:21:55.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:55.794 "listen_address": { 01:21:55.794 "trtype": "TCP", 01:21:55.794 "adrfam": "IPv4", 01:21:55.794 "traddr": "10.0.0.3", 01:21:55.794 "trsvcid": "4420" 01:21:55.794 }, 01:21:55.794 "peer_address": { 01:21:55.794 "trtype": "TCP", 01:21:55.794 "adrfam": "IPv4", 01:21:55.794 "traddr": "10.0.0.1", 01:21:55.794 "trsvcid": "53944" 01:21:55.794 }, 01:21:55.794 "auth": { 01:21:55.794 "state": "completed", 01:21:55.794 "digest": "sha512", 01:21:55.794 "dhgroup": "ffdhe8192" 01:21:55.794 } 01:21:55.794 } 01:21:55.794 ]' 01:21:55.794 05:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:55.794 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:55.794 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:55.794 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:21:55.794 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:55.794 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:55.794 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:55.794 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:56.054 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:21:56.054 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: --dhchap-ctrl-secret DHHC-1:02:MzJhMjI4Zjk0NmU0ZTdiZDk4MWQzYjdlZDg2ZGNmZDUxMDhmYTM4YTkzMmY1ZWJi1Ai+GA==: 01:21:56.623 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:56.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:56.623 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:56.623 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:56.623 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:56.623 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:56.623 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:56.623 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:21:56.623 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:21:56.882 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 01:21:56.882 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:56.882 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:21:56.882 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 01:21:56.882 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:21:56.882 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:56.882 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:56.882 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:56.882 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:56.882 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:56.882 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:56.882 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:56.882 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:21:57.449 01:21:57.449 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:57.449 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:57.449 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:57.707 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:57.707 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:57.707 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:57.707 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:57.708 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:57.708 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:57.708 { 01:21:57.708 "cntlid": 141, 01:21:57.708 "qid": 0, 01:21:57.708 "state": "enabled", 01:21:57.708 "thread": "nvmf_tgt_poll_group_000", 01:21:57.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:57.708 "listen_address": { 01:21:57.708 "trtype": "TCP", 01:21:57.708 "adrfam": "IPv4", 01:21:57.708 "traddr": "10.0.0.3", 01:21:57.708 "trsvcid": "4420" 01:21:57.708 }, 01:21:57.708 "peer_address": { 01:21:57.708 "trtype": "TCP", 01:21:57.708 "adrfam": "IPv4", 01:21:57.708 "traddr": "10.0.0.1", 01:21:57.708 "trsvcid": "53974" 01:21:57.708 }, 01:21:57.708 "auth": { 01:21:57.708 "state": "completed", 01:21:57.708 "digest": 
"sha512", 01:21:57.708 "dhgroup": "ffdhe8192" 01:21:57.708 } 01:21:57.708 } 01:21:57.708 ]' 01:21:57.708 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:57.708 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:57.708 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:57.708 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:21:57.708 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:57.708 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:57.708 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:57.708 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:57.966 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:57.966 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:01:MjZhMDE4N2E2Y2M1ZmIwMjE0OTNjNDkyMGZmOTI1MWFTSBIt: 01:21:58.540 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:21:58.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:21:58.540 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:21:58.540 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:58.540 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:58.540 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:58.540 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:21:58.540 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:21:58.540 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:21:58.799 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 01:21:58.799 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:21:58.799 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 01:21:58.799 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:21:58.799 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:21:58.799 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:21:58.799 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:21:58.799 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:58.799 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:58.799 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:58.799 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:21:58.799 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:58.799 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:21:59.367 01:21:59.367 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:21:59.367 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:21:59.367 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:21:59.627 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:21:59.627 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:21:59.627 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:21:59.627 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:21:59.627 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:21:59.627 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:21:59.627 { 01:21:59.627 "cntlid": 143, 01:21:59.627 "qid": 0, 01:21:59.627 "state": "enabled", 01:21:59.627 "thread": "nvmf_tgt_poll_group_000", 01:21:59.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:21:59.627 "listen_address": { 01:21:59.627 "trtype": "TCP", 01:21:59.627 "adrfam": "IPv4", 01:21:59.627 "traddr": "10.0.0.3", 01:21:59.627 "trsvcid": "4420" 01:21:59.627 }, 01:21:59.627 "peer_address": { 01:21:59.627 "trtype": "TCP", 01:21:59.627 "adrfam": "IPv4", 01:21:59.627 "traddr": "10.0.0.1", 01:21:59.627 "trsvcid": "53986" 01:21:59.627 }, 01:21:59.627 "auth": { 01:21:59.627 "state": "completed", 01:21:59.627 
"digest": "sha512", 01:21:59.627 "dhgroup": "ffdhe8192" 01:21:59.627 } 01:21:59.627 } 01:21:59.627 ]' 01:21:59.627 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:21:59.627 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:21:59.627 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:21:59.627 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:21:59.627 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:21:59.627 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:21:59.627 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:21:59.627 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:21:59.887 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:21:59.887 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:22:00.457 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:00.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:00.457 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:22:00.457 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:00.457 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:00.457 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:00.457 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 01:22:00.457 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 01:22:00.457 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 01:22:00.457 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:22:00.457 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:22:00.457 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:22:00.717 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 01:22:00.717 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:00.717 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:22:00.717 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:22:00.717 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:22:00.717 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:00.717 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:00.717 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:00.717 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:00.717 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:00.717 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:00.717 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:00.717 05:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:22:01.288 01:22:01.288 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:01.288 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:01.288 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:01.548 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:01.548 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:01.548 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:01.548 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:01.548 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:01.548 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:01.548 { 01:22:01.548 "cntlid": 145, 01:22:01.548 "qid": 0, 01:22:01.548 "state": "enabled", 01:22:01.548 "thread": "nvmf_tgt_poll_group_000", 01:22:01.548 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:22:01.548 "listen_address": { 01:22:01.548 "trtype": "TCP", 01:22:01.548 "adrfam": "IPv4", 01:22:01.548 "traddr": "10.0.0.3", 01:22:01.548 "trsvcid": "4420" 01:22:01.548 }, 01:22:01.548 "peer_address": { 01:22:01.548 "trtype": "TCP", 01:22:01.548 "adrfam": "IPv4", 01:22:01.548 "traddr": "10.0.0.1", 01:22:01.548 "trsvcid": "49184" 01:22:01.548 }, 01:22:01.548 "auth": { 01:22:01.548 "state": "completed", 01:22:01.548 "digest": "sha512", 01:22:01.548 "dhgroup": "ffdhe8192" 01:22:01.548 } 01:22:01.548 } 01:22:01.548 ]' 01:22:01.548 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:01.548 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:22:01.548 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:01.548 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:22:01.548 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:01.548 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:01.548 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:01.548 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:01.808 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:22:01.808 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:00:NzhmN2E2YWFmNzdlZDZjYjg3ODQyZjU4Y2I3NjM2OTNiMzFiMzhjMTQyZmNjZGYxzNn33Q==: --dhchap-ctrl-secret DHHC-1:03:YjVlM2M3ZmUyMjRlM2Q3ZGYxZDljNWQ2YzFlOGI5Yzg1NDJjOTA5NmQ4YTQyMmVhZjRiMTBlNmQyODVmOTg0OT7Mb6E=: 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:02.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 01:22:02.378 05:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 01:22:02.378 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 01:22:02.948 request: 01:22:02.948 { 01:22:02.948 "name": "nvme0", 01:22:02.948 "trtype": "tcp", 01:22:02.948 "traddr": "10.0.0.3", 01:22:02.948 "adrfam": "ipv4", 01:22:02.948 "trsvcid": "4420", 01:22:02.948 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:22:02.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:22:02.948 "prchk_reftag": false, 01:22:02.948 "prchk_guard": false, 01:22:02.948 "hdgst": false, 01:22:02.948 "ddgst": false, 01:22:02.948 "dhchap_key": "key2", 01:22:02.948 "allow_unrecognized_csi": false, 01:22:02.948 "method": "bdev_nvme_attach_controller", 01:22:02.948 "req_id": 1 01:22:02.948 } 01:22:02.948 Got JSON-RPC error response 01:22:02.948 response: 01:22:02.948 { 01:22:02.948 "code": -5, 01:22:02.948 "message": "Input/output error" 01:22:02.948 } 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:22:02.948 
05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:22:02.948 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:22:03.516 request: 01:22:03.516 { 01:22:03.516 "name": "nvme0", 01:22:03.516 "trtype": "tcp", 01:22:03.516 "traddr": "10.0.0.3", 01:22:03.516 "adrfam": "ipv4", 01:22:03.516 "trsvcid": "4420", 01:22:03.516 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:22:03.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:22:03.516 "prchk_reftag": false, 01:22:03.516 "prchk_guard": false, 01:22:03.516 "hdgst": false, 01:22:03.516 "ddgst": false, 01:22:03.516 "dhchap_key": "key1", 01:22:03.516 "dhchap_ctrlr_key": "ckey2", 01:22:03.516 "allow_unrecognized_csi": false, 01:22:03.516 "method": "bdev_nvme_attach_controller", 01:22:03.516 "req_id": 1 01:22:03.516 } 01:22:03.516 Got JSON-RPC error response 01:22:03.516 response: 01:22:03.516 { 
01:22:03.516 "code": -5, 01:22:03.516 "message": "Input/output error" 01:22:03.516 } 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:03.516 05:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:22:04.083 
request: 01:22:04.083 { 01:22:04.083 "name": "nvme0", 01:22:04.083 "trtype": "tcp", 01:22:04.084 "traddr": "10.0.0.3", 01:22:04.084 "adrfam": "ipv4", 01:22:04.084 "trsvcid": "4420", 01:22:04.084 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:22:04.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:22:04.084 "prchk_reftag": false, 01:22:04.084 "prchk_guard": false, 01:22:04.084 "hdgst": false, 01:22:04.084 "ddgst": false, 01:22:04.084 "dhchap_key": "key1", 01:22:04.084 "dhchap_ctrlr_key": "ckey1", 01:22:04.084 "allow_unrecognized_csi": false, 01:22:04.084 "method": "bdev_nvme_attach_controller", 01:22:04.084 "req_id": 1 01:22:04.084 } 01:22:04.084 Got JSON-RPC error response 01:22:04.084 response: 01:22:04.084 { 01:22:04.084 "code": -5, 01:22:04.084 "message": "Input/output error" 01:22:04.084 } 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67351 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67351 ']' 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67351 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67351 01:22:04.084 killing process with pid 67351 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67351' 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67351 01:22:04.084 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67351 01:22:04.343 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 01:22:04.343 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:22:04.343 05:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 01:22:04.343 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:04.343 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 01:22:04.343 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70168 01:22:04.343 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70168 01:22:04.343 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70168 ']' 01:22:04.343 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:22:04.343 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:04.343 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:22:04.343 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:04.343 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:05.281 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:05.281 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:22:05.281 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:22:05.281 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 01:22:05.281 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:05.281 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:22:05.281 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 01:22:05.281 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70168 01:22:05.281 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70168 ']' 01:22:05.281 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:22:05.281 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:05.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:22:05.281 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
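A minimal sketch of the provisioning pattern exercised from this point on, assuming the same RPC sockets, key files, and NQNs that appear in this log (the restarted target is started with --wait-for-rpc so the DH-HMAC-CHAP secrets can be registered in its keyring by name before any host authenticates):
    # Target side (default /var/tmp/spdk.sock): register the secret files so they can
    # be referenced as key0..key3 / ckey0..ckey2 instead of being passed inline.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.305
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yq0
    # Allow the host NQN to authenticate against the subsystem with that key pair.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1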
01:22:05.281 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:05.281 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:05.541 null0 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GB5 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.TA8 ]] 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TA8 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.305 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Yq0 ]] 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yq0 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 01:22:05.541 05:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.BTy 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.eYa ]] 01:22:05.541 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eYa 01:22:05.542 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:05.542 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:05.801 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:05.801 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 01:22:05.801 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dQ3 01:22:05.801 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:05.801 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:05.801 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:05.801 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 01:22:05.801 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 01:22:05.801 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:22:05.801 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:22:05.801 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:22:05.801 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:22:05.801 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:22:05.801 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:22:05.801 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:05.801 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:05.801 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:05.801 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:22:05.801 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
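For the host side of the same flow, a minimal sketch assuming the separate host RPC socket and the key names registered above (this mirrors the bdev_nvme_attach_controller traces in this log; it is illustrative, not additional captured output):
    # Host side (/var/tmp/host.sock): attach a controller using the keyring name key3,
    # not the secret itself.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
    # When the key or controller key does not match what the subsystem allows, the
    # attach is expected to fail with JSON-RPC code -5 ("Input/output error"), which
    # is the negative-path check the surrounding "NOT bdev_connect" entries perform.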
01:22:05.801 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:22:06.368 nvme0n1 01:22:06.627 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:22:06.627 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:22:06.627 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:06.627 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:06.627 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:22:06.627 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:06.627 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:06.627 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:06.627 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:22:06.627 { 01:22:06.627 "cntlid": 1, 01:22:06.627 "qid": 0, 01:22:06.627 "state": "enabled", 01:22:06.627 "thread": "nvmf_tgt_poll_group_000", 01:22:06.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:22:06.627 "listen_address": { 01:22:06.627 "trtype": "TCP", 01:22:06.627 "adrfam": "IPv4", 01:22:06.627 "traddr": "10.0.0.3", 01:22:06.627 "trsvcid": "4420" 01:22:06.627 }, 01:22:06.627 "peer_address": { 01:22:06.627 "trtype": "TCP", 01:22:06.627 "adrfam": "IPv4", 01:22:06.627 "traddr": "10.0.0.1", 01:22:06.627 "trsvcid": "49226" 01:22:06.627 }, 01:22:06.627 "auth": { 01:22:06.627 "state": "completed", 01:22:06.627 "digest": "sha512", 01:22:06.627 "dhgroup": "ffdhe8192" 01:22:06.627 } 01:22:06.627 } 01:22:06.627 ]' 01:22:06.627 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:22:06.887 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:22:06.887 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:22:06.887 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:22:06.887 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:22:06.887 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:22:06.887 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:06.887 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:07.146 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:22:07.146 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:22:07.715 05:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:07.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:07.716 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:22:07.716 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:07.716 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:07.716 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:07.716 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key3 01:22:07.716 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:07.716 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:07.716 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:07.716 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 01:22:07.716 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 01:22:07.975 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 01:22:07.975 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:22:07.975 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 01:22:07.975 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:22:07.975 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:07.975 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:22:07.975 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:07.975 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 01:22:07.975 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:22:07.975 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:22:08.232 request: 01:22:08.232 { 01:22:08.232 "name": "nvme0", 01:22:08.232 "trtype": "tcp", 01:22:08.232 "traddr": "10.0.0.3", 01:22:08.232 "adrfam": "ipv4", 01:22:08.232 "trsvcid": "4420", 01:22:08.232 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:22:08.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:22:08.232 "prchk_reftag": false, 01:22:08.232 "prchk_guard": false, 01:22:08.232 "hdgst": false, 01:22:08.232 "ddgst": false, 01:22:08.232 "dhchap_key": "key3", 01:22:08.232 "allow_unrecognized_csi": false, 01:22:08.232 "method": "bdev_nvme_attach_controller", 01:22:08.232 "req_id": 1 01:22:08.232 } 01:22:08.232 Got JSON-RPC error response 01:22:08.232 response: 01:22:08.232 { 01:22:08.232 "code": -5, 01:22:08.232 "message": "Input/output error" 01:22:08.232 } 01:22:08.232 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:22:08.232 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:08.232 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:08.232 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:08.232 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 01:22:08.232 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 01:22:08.232 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 01:22:08.232 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 01:22:08.491 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 01:22:08.491 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:22:08.491 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 01:22:08.491 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:22:08.491 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:08.491 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:22:08.491 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:08.491 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 01:22:08.491 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:22:08.491 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:22:08.750 request: 01:22:08.750 { 01:22:08.750 "name": "nvme0", 01:22:08.750 "trtype": "tcp", 01:22:08.750 "traddr": "10.0.0.3", 01:22:08.750 "adrfam": "ipv4", 01:22:08.750 "trsvcid": "4420", 01:22:08.750 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:22:08.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:22:08.750 "prchk_reftag": false, 01:22:08.750 "prchk_guard": false, 01:22:08.750 "hdgst": false, 01:22:08.750 "ddgst": false, 01:22:08.750 "dhchap_key": "key3", 01:22:08.750 "allow_unrecognized_csi": false, 01:22:08.750 "method": "bdev_nvme_attach_controller", 01:22:08.750 "req_id": 1 01:22:08.750 } 01:22:08.750 Got JSON-RPC error response 01:22:08.750 response: 01:22:08.750 { 01:22:08.750 "code": -5, 01:22:08.750 "message": "Input/output error" 01:22:08.750 } 01:22:08.750 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:22:08.750 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:08.750 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:08.750 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:08.750 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 01:22:08.750 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 01:22:08.750 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 01:22:08.750 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:22:08.750 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:22:08.750 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:22:08.750 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:22:08.750 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:08.750 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:08.750 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:08.750 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:22:08.750 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:08.750 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:09.009 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:09.009 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:22:09.009 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:22:09.009 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:22:09.009 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:22:09.009 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:09.009 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:22:09.009 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:09.009 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:22:09.009 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:22:09.009 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:22:09.268 request: 01:22:09.268 { 01:22:09.268 "name": "nvme0", 01:22:09.268 "trtype": "tcp", 01:22:09.268 "traddr": "10.0.0.3", 01:22:09.268 "adrfam": "ipv4", 01:22:09.268 "trsvcid": "4420", 01:22:09.268 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:22:09.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:22:09.268 "prchk_reftag": false, 01:22:09.268 "prchk_guard": false, 01:22:09.268 "hdgst": false, 01:22:09.268 "ddgst": false, 01:22:09.268 "dhchap_key": "key0", 01:22:09.268 "dhchap_ctrlr_key": "key1", 01:22:09.268 "allow_unrecognized_csi": false, 01:22:09.268 "method": "bdev_nvme_attach_controller", 01:22:09.268 "req_id": 1 01:22:09.268 } 01:22:09.268 Got JSON-RPC error response 01:22:09.268 response: 01:22:09.268 { 01:22:09.268 "code": -5, 01:22:09.268 "message": "Input/output error" 01:22:09.268 } 01:22:09.268 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:22:09.268 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:09.268 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:09.268 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 01:22:09.268 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 01:22:09.268 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 01:22:09.268 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 01:22:09.527 nvme0n1 01:22:09.527 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 01:22:09.527 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:09.527 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 01:22:09.785 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:09.785 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:09.786 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:10.045 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 01:22:10.045 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:10.045 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:10.045 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:10.045 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 01:22:10.045 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 01:22:10.045 05:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 01:22:10.988 nvme0n1 01:22:10.988 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 01:22:10.988 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:10.988 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 01:22:10.988 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:10.988 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key key3 01:22:10.988 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:10.988 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:10.988 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:10.988 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 01:22:10.988 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:10.988 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 01:22:11.247 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:11.247 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:22:11.248 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid 0567fff2-ddaf-4a4f-877d-a2600d7e662b -l 0 --dhchap-secret DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: --dhchap-ctrl-secret DHHC-1:03:NzlmMWVmZDI2OWNkMWEwYmVkNjk1OGU1MDI2MDUwODM2YmRjOWFiZGZhNTJhMmZiNjQ1YjM3YWI4YzFjZmExZXKHK6Y=: 01:22:11.817 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 01:22:11.817 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 01:22:11.817 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 01:22:11.817 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 01:22:11.817 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 01:22:11.817 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 01:22:11.817 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 01:22:11.817 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:11.817 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:12.076 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 01:22:12.076 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:22:12.076 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 01:22:12.076 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:22:12.076 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:12.076 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:22:12.076 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:12.076 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 01:22:12.076 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 01:22:12.077 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 01:22:12.355 request: 01:22:12.355 { 01:22:12.355 "name": "nvme0", 01:22:12.355 "trtype": "tcp", 01:22:12.355 "traddr": "10.0.0.3", 01:22:12.355 "adrfam": "ipv4", 01:22:12.355 "trsvcid": "4420", 01:22:12.355 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:22:12.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b", 01:22:12.355 "prchk_reftag": false, 01:22:12.355 "prchk_guard": false, 01:22:12.355 "hdgst": false, 01:22:12.355 "ddgst": false, 01:22:12.355 "dhchap_key": "key1", 01:22:12.355 "allow_unrecognized_csi": false, 01:22:12.355 "method": "bdev_nvme_attach_controller", 01:22:12.355 "req_id": 1 01:22:12.355 } 01:22:12.355 Got JSON-RPC error response 01:22:12.355 response: 01:22:12.355 { 01:22:12.355 "code": -5, 01:22:12.355 "message": "Input/output error" 01:22:12.355 } 01:22:12.619 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:22:12.619 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:12.619 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:12.619 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:12.619 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:22:12.619 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:22:12.619 05:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:22:13.188 nvme0n1 01:22:13.188 
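For readers following the rotation just traced: the pattern is to install the new DH-HMAC-CHAP keys on the subsystem first (target side, nvmf_subsystem_set_keys), then detach and re-attach from the host offering the matching key; offering a key the target no longer accepts is the case the NOT-wrapped attempts assert on, failing with -5 (Input/output error). A minimal sketch of that round trip follows. The target-side RPC socket path is an assumption (the trace only shows the host-side socket /var/tmp/host.sock), and key0/key1 are names of keys registered with both sides earlier in the run, not shown in this excerpt.

# Target side: from now on, accept key1 from this host (socket path assumed)
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_set_keys \
    nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b \
    --dhchap-key key1

# Host side: drop the old controller and reconnect with the new key
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1

# Reconnecting with a stale key instead (e.g. key0 after the switch to key1)
# is expected to fail authentication and surface as "Input/output error".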
05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 01:22:13.188 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 01:22:13.188 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:13.447 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:13.447 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:13.447 05:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:13.706 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:22:13.706 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:13.706 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:13.706 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:13.706 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 01:22:13.706 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 01:22:13.706 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 01:22:13.966 nvme0n1 01:22:13.966 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 01:22:13.966 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:13.966 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 01:22:14.225 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:14.225 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 01:22:14.225 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:22:14.485 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key key3 01:22:14.485 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:14.485 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:14.485 05:16:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:14.485 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: '' 2s 01:22:14.485 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 01:22:14.485 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 01:22:14.485 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: 01:22:14.485 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 01:22:14.485 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 01:22:14.485 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 01:22:14.485 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: ]] 01:22:14.485 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NjJiNGM5NTJjMTgzMWFlZWIxOWEwZmE0MWI2ZjYzYWFcYrja: 01:22:14.485 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 01:22:14.485 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 01:22:14.485 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key1 --dhchap-ctrlr-key key2 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: 2s 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 01:22:16.388 05:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: ]] 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MzdjNGMxMDA2YjgxNjkwM2EwMDA4OTE1MTNmODhkOTRlODJhNGRjYWIzYWI3YmM5CzI+UQ==: 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 01:22:16.388 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 01:22:18.919 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 01:22:18.919 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 01:22:18.919 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:22:18.919 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 01:22:18.919 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 01:22:18.919 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:22:18.919 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 01:22:18.919 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:22:18.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:22:18.919 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key key1 01:22:18.919 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:18.919 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:18.919 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:18.919 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:22:18.919 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:22:18.919 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:22:19.488 nvme0n1 01:22:19.488 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key key3 01:22:19.488 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:19.488 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:19.488 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:19.488 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:22:19.488 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:22:20.058 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 01:22:20.058 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:20.058 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 01:22:20.058 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:20.059 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:22:20.059 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:20.059 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:20.059 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:20.059 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 01:22:20.059 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 01:22:20.319 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 01:22:20.319 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:20.319 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 01:22:20.578 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:22:20.578 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key key3 01:22:20.578 05:17:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:20.578 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:20.578 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:20.578 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 01:22:20.578 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:22:20.578 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 01:22:20.578 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 01:22:20.578 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:20.578 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 01:22:20.578 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:20.578 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 01:22:20.578 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 01:22:21.149 request: 01:22:21.149 { 01:22:21.149 "name": "nvme0", 01:22:21.149 "dhchap_key": "key1", 01:22:21.149 "dhchap_ctrlr_key": "key3", 01:22:21.149 "method": "bdev_nvme_set_keys", 01:22:21.149 "req_id": 1 01:22:21.149 } 01:22:21.149 Got JSON-RPC error response 01:22:21.149 response: 01:22:21.149 { 01:22:21.149 "code": -13, 01:22:21.149 "message": "Permission denied" 01:22:21.149 } 01:22:21.149 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:22:21.149 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:21.149 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:21.149 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:21.149 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 01:22:21.149 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:21.149 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 01:22:21.149 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 01:22:21.149 05:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 01:22:22.090 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 01:22:22.090 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 01:22:22.090 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:22.349 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 01:22:22.349 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key0 --dhchap-ctrlr-key key1 01:22:22.349 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:22.349 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:22.349 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:22.349 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:22:22.349 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:22:22.349 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:22:23.285 nvme0n1 01:22:23.285 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --dhchap-key key2 --dhchap-ctrlr-key key3 01:22:23.285 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:23.285 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:23.285 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:23.285 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 01:22:23.285 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:22:23.285 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 01:22:23.285 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 01:22:23.285 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:23.285 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 01:22:23.285 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:23.285 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 01:22:23.285 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 01:22:23.854 request: 01:22:23.854 { 01:22:23.854 "name": "nvme0", 01:22:23.854 "dhchap_key": "key2", 01:22:23.854 "dhchap_ctrlr_key": "key0", 01:22:23.854 "method": "bdev_nvme_set_keys", 01:22:23.854 "req_id": 1 01:22:23.854 } 01:22:23.854 Got JSON-RPC error response 01:22:23.854 response: 01:22:23.854 { 01:22:23.854 "code": -13, 01:22:23.854 "message": "Permission denied" 01:22:23.854 } 01:22:23.854 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:22:23.854 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:23.854 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:23.854 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:23.854 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 01:22:23.854 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:23.854 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 01:22:24.114 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 01:22:24.114 05:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 01:22:25.054 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 01:22:25.054 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:22:25.054 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 01:22:25.315 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 01:22:25.315 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 01:22:25.315 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 01:22:25.315 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67382 01:22:25.315 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67382 ']' 01:22:25.315 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67382 01:22:25.315 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 01:22:25.315 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:22:25.315 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67382 01:22:25.315 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:22:25.315 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:22:25.315 killing process with pid 67382 01:22:25.315 05:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67382' 01:22:25.315 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67382 01:22:25.315 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67382 01:22:25.574 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 01:22:25.574 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 01:22:25.574 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 01:22:25.574 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:22:25.574 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 01:22:25.574 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 01:22:25.574 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:22:25.574 rmmod nvme_tcp 01:22:25.574 rmmod nvme_fabrics 01:22:25.574 rmmod nvme_keyring 01:22:25.574 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70168 ']' 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70168 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70168 ']' 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70168 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70168 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:22:25.833 killing process with pid 70168 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70168' 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70168 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70168 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
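The teardown traced here runs the killprocess helper twice (67382 appears to be the SPDK app behind /var/tmp/host.sock, 70168 the nvmf target) before unloading nvme-tcp/nvme-fabrics/nvme-keyring and flushing the SPDK firewall rules. A condensed sketch of what that helper does, based on the xtrace above; this is not the literal function from autotest_common.sh.

# Kill an SPDK test process only if it is still alive and not a sudo wrapper
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1                          # still running?
    [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1    # never kill a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                                 # reap it if it is our child
}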
01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 01:22:25.833 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:22:26.093 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:22:26.352 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 01:22:26.352 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.GB5 /tmp/spdk.key-sha256.305 /tmp/spdk.key-sha384.BTy /tmp/spdk.key-sha512.dQ3 /tmp/spdk.key-sha512.TA8 /tmp/spdk.key-sha384.Yq0 /tmp/spdk.key-sha256.eYa '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 01:22:26.352 01:22:26.352 real 2m41.230s 01:22:26.352 user 6m17.328s 01:22:26.352 sys 0m26.454s 01:22:26.352 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 01:22:26.352 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:22:26.352 ************************************ 01:22:26.352 END TEST nvmf_auth_target 
01:22:26.352 ************************************ 01:22:26.352 05:17:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 01:22:26.352 05:17:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 01:22:26.352 05:17:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:22:26.352 05:17:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:22:26.352 05:17:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:22:26.352 ************************************ 01:22:26.352 START TEST nvmf_bdevio_no_huge 01:22:26.352 ************************************ 01:22:26.352 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 01:22:26.352 * Looking for test storage... 01:22:26.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:22:26.352 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:22:26.352 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 01:22:26.352 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 01:22:26.613 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:22:26.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:22:26.614 --rc genhtml_branch_coverage=1 01:22:26.614 --rc genhtml_function_coverage=1 01:22:26.614 --rc genhtml_legend=1 01:22:26.614 --rc geninfo_all_blocks=1 01:22:26.614 --rc geninfo_unexecuted_blocks=1 01:22:26.614 01:22:26.614 ' 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:22:26.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:22:26.614 --rc genhtml_branch_coverage=1 01:22:26.614 --rc genhtml_function_coverage=1 01:22:26.614 --rc genhtml_legend=1 01:22:26.614 --rc geninfo_all_blocks=1 01:22:26.614 --rc geninfo_unexecuted_blocks=1 01:22:26.614 01:22:26.614 ' 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:22:26.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:22:26.614 --rc genhtml_branch_coverage=1 01:22:26.614 --rc genhtml_function_coverage=1 01:22:26.614 --rc genhtml_legend=1 01:22:26.614 --rc geninfo_all_blocks=1 01:22:26.614 --rc geninfo_unexecuted_blocks=1 01:22:26.614 01:22:26.614 ' 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:22:26.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:22:26.614 --rc genhtml_branch_coverage=1 01:22:26.614 --rc genhtml_function_coverage=1 01:22:26.614 --rc genhtml_legend=1 01:22:26.614 --rc geninfo_all_blocks=1 01:22:26.614 --rc geninfo_unexecuted_blocks=1 01:22:26.614 01:22:26.614 ' 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:22:26.614 
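The lcov check traced above ("lt 1.15 2" → cmp_versions) splits both version strings on ".-:" and compares them numerically field by field, treating missing fields as 0, so 1.15 sorts below 2 and the lcov-1.x-specific LCOV_OPTS get exported. A condensed stand-in for that comparison is below; it is a sketch of the logic walked through in the trace, not the literal cmp_versions from scripts/common.sh.

# Return success (0) iff version $1 is strictly older than version $2
lt() {
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 predates 2, enabling branch/function coverage options"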
05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:22:26.614 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 01:22:26.614 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:22:26.615 
05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:22:26.615 Cannot find device "nvmf_init_br" 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:22:26.615 Cannot find device "nvmf_init_br2" 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:22:26.615 Cannot find device "nvmf_tgt_br" 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:22:26.615 Cannot find device "nvmf_tgt_br2" 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:22:26.615 Cannot find device "nvmf_init_br" 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 01:22:26.615 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:22:26.615 Cannot find device "nvmf_init_br2" 01:22:26.615 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 01:22:26.615 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:22:26.615 Cannot find device "nvmf_tgt_br" 01:22:26.615 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 01:22:26.615 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:22:26.615 Cannot find device "nvmf_tgt_br2" 01:22:26.615 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 01:22:26.615 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:22:26.615 Cannot find device "nvmf_br" 01:22:26.615 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 01:22:26.615 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:22:26.876 Cannot find device "nvmf_init_if" 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:22:26.876 Cannot find device "nvmf_init_if2" 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 01:22:26.876 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:22:26.876 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:22:26.876 05:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:22:26.876 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:22:26.876 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 01:22:26.876 01:22:26.876 --- 10.0.0.3 ping statistics --- 01:22:26.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:22:26.876 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:22:26.876 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:22:26.876 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.095 ms 01:22:26.876 01:22:26.876 --- 10.0.0.4 ping statistics --- 01:22:26.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:22:26.876 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:22:26.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:22:26.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 01:22:26.876 01:22:26.876 --- 10.0.0.1 ping statistics --- 01:22:26.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:22:26.876 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 01:22:26.876 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:22:27.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:22:27.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 01:22:27.136 01:22:27.136 --- 10.0.0.2 ping statistics --- 01:22:27.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:22:27.136 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70781 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70781 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70781 ']' 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:27.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:27.136 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:22:27.136 [2024-12-09 05:17:09.431437] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
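The block above is nvmf_veth_init followed by nvmfappstart: it provisions the virtual test network (a dedicated netns for the target, veth pairs bridged on the host, iptables ACCEPT rules for port 4420 tagged with an SPDK_NVMF comment so teardown can later strip exactly those rules with iptables-save | grep -v SPDK_NVMF | iptables-restore), verifies reachability with pings in both directions, and finally launches nvmf_tgt inside the namespace. Condensed to the essential commands actually traced (the second veth pair on 10.0.0.2/10.0.0.4 and the intermediate "ip link set ... up" steps are elided here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side moves into the netns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                        # both host-side peers join the bridge
ip link set nvmf_tgt_br  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:...'                  # tagged so teardown can remove it
ping -c 1 10.0.0.3                                             # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target namespace -> host
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

The --no-huge -s 1024 pair is what gives this test its name: the target runs from about 1 GiB of ordinary (non-hugepage) memory, which is why the DPDK EAL parameters line that follows shows --no-huge --legacy-mem.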
01:22:27.136 [2024-12-09 05:17:09.431506] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 01:22:27.136 [2024-12-09 05:17:09.588042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:22:27.396 [2024-12-09 05:17:09.640470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:22:27.396 [2024-12-09 05:17:09.640523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:22:27.396 [2024-12-09 05:17:09.640529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:22:27.396 [2024-12-09 05:17:09.640533] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:22:27.396 [2024-12-09 05:17:09.640537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:22:27.396 [2024-12-09 05:17:09.641084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:22:27.396 [2024-12-09 05:17:09.641283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:22:27.396 [2024-12-09 05:17:09.641474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:22:27.396 [2024-12-09 05:17:09.641476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:22:27.396 [2024-12-09 05:17:09.645974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:22:27.964 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:27.964 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 01:22:27.964 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:22:27.964 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 01:22:27.964 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:22:27.964 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:22:27.964 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:22:27.964 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:27.964 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:22:27.964 [2024-12-09 05:17:10.342671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:22:27.965 Malloc0 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:27.965 05:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:22:27.965 [2024-12-09 05:17:10.395966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:22:27.965 { 01:22:27.965 "params": { 01:22:27.965 "name": "Nvme$subsystem", 01:22:27.965 "trtype": "$TEST_TRANSPORT", 01:22:27.965 "traddr": "$NVMF_FIRST_TARGET_IP", 01:22:27.965 "adrfam": "ipv4", 01:22:27.965 "trsvcid": "$NVMF_PORT", 01:22:27.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:22:27.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:22:27.965 "hdgst": ${hdgst:-false}, 01:22:27.965 "ddgst": ${ddgst:-false} 01:22:27.965 }, 01:22:27.965 "method": "bdev_nvme_attach_controller" 01:22:27.965 } 01:22:27.965 EOF 01:22:27.965 )") 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 01:22:27.965 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
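With the target listening, bdevio.sh drives the rest over JSON-RPC: rpc_cmd (a wrapper that effectively calls scripts/rpc.py against the /var/tmp/spdk.sock socket inside the namespace) creates the TCP transport, a 64 MiB / 512-byte-block malloc bdev, subsystem cnode1 with that namespace, and a listener on 10.0.0.3:4420; gen_nvmf_target_json then prints the bdev_nvme_attach_controller parameters shown just below, so the bdevio app, fed that config on /dev/fd/62, exercises the malloc bdev through a real NVMe/TCP connection rather than locally. Roughly equivalent standalone commands, with rpc.py shown directly (the full JSON envelope around the printed parameters is produced by gen_nvmf_target_json and not reproduced here):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"     # what rpc_cmd resolves to in this trace
$RPC nvmf_create_transport -t tcp -o -u 8192          # arguments exactly as issued by bdevio.sh@18
$RPC bdev_malloc_create 64 512 -b Malloc0             # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator side: bdevio receives the generated config via process substitution,
# which is where the --json /dev/fd/62 in the trace comes from.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
    --json <(gen_nvmf_target_json) --no-huge -s 1024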
01:22:28.224 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 01:22:28.224 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:22:28.224 "params": { 01:22:28.224 "name": "Nvme1", 01:22:28.224 "trtype": "tcp", 01:22:28.224 "traddr": "10.0.0.3", 01:22:28.224 "adrfam": "ipv4", 01:22:28.224 "trsvcid": "4420", 01:22:28.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:22:28.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:22:28.224 "hdgst": false, 01:22:28.224 "ddgst": false 01:22:28.224 }, 01:22:28.224 "method": "bdev_nvme_attach_controller" 01:22:28.224 }' 01:22:28.224 [2024-12-09 05:17:10.453949] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:22:28.224 [2024-12-09 05:17:10.454002] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70817 ] 01:22:28.224 [2024-12-09 05:17:10.590107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:22:28.224 [2024-12-09 05:17:10.640872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:22:28.224 [2024-12-09 05:17:10.640961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:22:28.224 [2024-12-09 05:17:10.640964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:22:28.224 [2024-12-09 05:17:10.654189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:22:28.483 I/O targets: 01:22:28.483 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 01:22:28.483 01:22:28.483 01:22:28.483 CUnit - A unit testing framework for C - Version 2.1-3 01:22:28.483 http://cunit.sourceforge.net/ 01:22:28.483 01:22:28.483 01:22:28.483 Suite: bdevio tests on: Nvme1n1 01:22:28.483 Test: blockdev write read block ...passed 01:22:28.483 Test: blockdev write zeroes read block ...passed 01:22:28.483 Test: blockdev write zeroes read no split ...passed 01:22:28.483 Test: blockdev write zeroes read split ...passed 01:22:28.483 Test: blockdev write zeroes read split partial ...passed 01:22:28.483 Test: blockdev reset ...[2024-12-09 05:17:10.850236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:22:28.483 [2024-12-09 05:17:10.850337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd63320 (9): Bad file descriptor 01:22:28.483 [2024-12-09 05:17:10.862399] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
01:22:28.483 passed 01:22:28.483 Test: blockdev write read 8 blocks ...passed 01:22:28.483 Test: blockdev write read size > 128k ...passed 01:22:28.483 Test: blockdev write read invalid size ...passed 01:22:28.483 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:22:28.483 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:22:28.483 Test: blockdev write read max offset ...passed 01:22:28.483 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:22:28.483 Test: blockdev writev readv 8 blocks ...passed 01:22:28.483 Test: blockdev writev readv 30 x 1block ...passed 01:22:28.483 Test: blockdev writev readv block ...passed 01:22:28.483 Test: blockdev writev readv size > 128k ...passed 01:22:28.483 Test: blockdev writev readv size > 128k in two iovs ...passed 01:22:28.483 Test: blockdev comparev and writev ...[2024-12-09 05:17:10.869445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:22:28.483 [2024-12-09 05:17:10.869605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:22:28.483 [2024-12-09 05:17:10.869703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:22:28.483 [2024-12-09 05:17:10.869780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:22:28.483 [2024-12-09 05:17:10.870102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:22:28.483 [2024-12-09 05:17:10.870222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:22:28.483 [2024-12-09 05:17:10.870314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:22:28.483 [2024-12-09 05:17:10.870405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:22:28.483 [2024-12-09 05:17:10.870742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:22:28.483 [2024-12-09 05:17:10.870846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:22:28.483 [2024-12-09 05:17:10.870944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:22:28.483 [2024-12-09 05:17:10.871039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:22:28.483 [2024-12-09 05:17:10.871359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:22:28.483 [2024-12-09 05:17:10.871475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:22:28.483 [2024-12-09 05:17:10.871576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:22:28.483 [2024-12-09 05:17:10.871653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:22:28.483 passed 01:22:28.483 Test: blockdev nvme passthru rw ...passed 01:22:28.483 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:17:10.872436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:22:28.483 [2024-12-09 05:17:10.872552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:22:28.483 [2024-12-09 05:17:10.872742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:22:28.483 [2024-12-09 05:17:10.872844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:22:28.483 [2024-12-09 05:17:10.873011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:22:28.483 [2024-12-09 05:17:10.873104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:22:28.483 [2024-12-09 05:17:10.873267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:22:28.483 [2024-12-09 05:17:10.873393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:22:28.483 passed 01:22:28.483 Test: blockdev nvme admin passthru ...passed 01:22:28.483 Test: blockdev copy ...passed 01:22:28.483 01:22:28.483 Run Summary: Type Total Ran Passed Failed Inactive 01:22:28.483 suites 1 1 n/a 0 0 01:22:28.483 tests 23 23 23 0 0 01:22:28.483 asserts 152 152 152 0 n/a 01:22:28.483 01:22:28.483 Elapsed time = 0.154 seconds 01:22:28.741 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:22:28.741 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:22:28.741 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:22:29.000 rmmod nvme_tcp 01:22:29.000 rmmod nvme_fabrics 01:22:29.000 rmmod nvme_keyring 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70781 ']' 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70781 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70781 ']' 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70781 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70781 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 01:22:29.000 killing process with pid 70781 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70781' 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70781 01:22:29.000 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70781 01:22:29.259 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:22:29.259 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:22:29.259 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:22:29.259 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 01:22:29.259 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 01:22:29.259 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:22:29.259 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 01:22:29.259 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:22:29.259 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:22:29.259 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:22:29.259 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:22:29.518 05:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 01:22:29.518 01:22:29.518 real 0m3.326s 01:22:29.518 user 0m9.257s 01:22:29.518 sys 0m1.314s 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 01:22:29.518 ************************************ 01:22:29.518 END TEST nvmf_bdevio_no_huge 01:22:29.518 ************************************ 01:22:29.518 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:22:29.778 05:17:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 01:22:29.779 05:17:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:22:29.779 05:17:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:22:29.779 05:17:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:22:29.779 ************************************ 01:22:29.779 START TEST nvmf_tls 01:22:29.779 ************************************ 01:22:29.779 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 01:22:29.779 * Looking for test storage... 
01:22:29.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:22:29.779 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:22:29.779 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 01:22:29.779 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:22:29.779 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:22:30.039 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:22:30.039 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 01:22:30.039 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 01:22:30.039 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 01:22:30.039 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 01:22:30.039 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 01:22:30.039 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 01:22:30.039 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 01:22:30.039 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 01:22:30.039 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:22:30.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:22:30.040 --rc genhtml_branch_coverage=1 01:22:30.040 --rc genhtml_function_coverage=1 01:22:30.040 --rc genhtml_legend=1 01:22:30.040 --rc geninfo_all_blocks=1 01:22:30.040 --rc geninfo_unexecuted_blocks=1 01:22:30.040 01:22:30.040 ' 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:22:30.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:22:30.040 --rc genhtml_branch_coverage=1 01:22:30.040 --rc genhtml_function_coverage=1 01:22:30.040 --rc genhtml_legend=1 01:22:30.040 --rc geninfo_all_blocks=1 01:22:30.040 --rc geninfo_unexecuted_blocks=1 01:22:30.040 01:22:30.040 ' 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:22:30.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:22:30.040 --rc genhtml_branch_coverage=1 01:22:30.040 --rc genhtml_function_coverage=1 01:22:30.040 --rc genhtml_legend=1 01:22:30.040 --rc geninfo_all_blocks=1 01:22:30.040 --rc geninfo_unexecuted_blocks=1 01:22:30.040 01:22:30.040 ' 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:22:30.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:22:30.040 --rc genhtml_branch_coverage=1 01:22:30.040 --rc genhtml_function_coverage=1 01:22:30.040 --rc genhtml_legend=1 01:22:30.040 --rc geninfo_all_blocks=1 01:22:30.040 --rc geninfo_unexecuted_blocks=1 01:22:30.040 01:22:30.040 ' 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:22:30.040 05:17:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:22:30.040 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:22:30.040 
05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:22:30.040 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:22:30.041 Cannot find device "nvmf_init_br" 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:22:30.041 Cannot find device "nvmf_init_br2" 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:22:30.041 Cannot find device "nvmf_tgt_br" 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:22:30.041 Cannot find device "nvmf_tgt_br2" 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:22:30.041 Cannot find device "nvmf_init_br" 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:22:30.041 Cannot find device "nvmf_init_br2" 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:22:30.041 Cannot find device "nvmf_tgt_br" 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:22:30.041 Cannot find device "nvmf_tgt_br2" 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:22:30.041 Cannot find device "nvmf_br" 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:22:30.041 Cannot find device "nvmf_init_if" 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 01:22:30.041 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:22:30.302 Cannot find device "nvmf_init_if2" 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:22:30.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:22:30.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:22:30.302 05:17:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:22:30.302 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:22:30.302 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 01:22:30.302 01:22:30.302 --- 10.0.0.3 ping statistics --- 01:22:30.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:22:30.302 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:22:30.302 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:22:30.302 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.098 ms 01:22:30.302 01:22:30.302 --- 10.0.0.4 ping statistics --- 01:22:30.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:22:30.302 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 01:22:30.302 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:22:30.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:22:30.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 01:22:30.562 01:22:30.562 --- 10.0.0.1 ping statistics --- 01:22:30.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:22:30.562 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:22:30.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:22:30.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 01:22:30.562 01:22:30.562 --- 10.0.0.2 ping statistics --- 01:22:30.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:22:30.562 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71051 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71051 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71051 ']' 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:30.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:30.562 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:22:30.562 [2024-12-09 05:17:12.863799] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:22:30.562 [2024-12-09 05:17:12.863883] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:22:30.562 [2024-12-09 05:17:13.010668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:22:30.822 [2024-12-09 05:17:13.062908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:22:30.822 [2024-12-09 05:17:13.062952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:22:30.822 [2024-12-09 05:17:13.062958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:22:30.822 [2024-12-09 05:17:13.062963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:22:30.822 [2024-12-09 05:17:13.062967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:22:30.822 [2024-12-09 05:17:13.063252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:22:31.392 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:31.392 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:22:31.392 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:22:31.392 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:22:31.392 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:22:31.392 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:22:31.392 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 01:22:31.392 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 01:22:31.652 true 01:22:31.652 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:22:31.652 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 01:22:31.912 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 01:22:31.912 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 01:22:31.912 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 01:22:32.171 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 01:22:32.171 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:22:32.431 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 01:22:32.431 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 01:22:32.431 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 01:22:32.431 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 01:22:32.431 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 01:22:32.690 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 01:22:32.690 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 01:22:32.690 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:22:32.690 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 01:22:32.949 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 01:22:32.949 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 01:22:32.949 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 01:22:33.207 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:22:33.207 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 01:22:33.467 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 01:22:33.467 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 01:22:33.467 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 01:22:33.467 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:22:33.467 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 01:22:33.726 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 01:22:33.985 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 01:22:33.985 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 01:22:33.985 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.ye9BBM2pOr 01:22:33.985 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 01:22:33.985 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.amUlndqDMK 01:22:33.985 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 01:22:33.985 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 01:22:33.985 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ye9BBM2pOr 01:22:33.985 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.amUlndqDMK 01:22:33.985 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 01:22:33.985 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 01:22:34.245 [2024-12-09 05:17:16.637560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:22:34.245 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.ye9BBM2pOr 01:22:34.245 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ye9BBM2pOr 01:22:34.245 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:22:34.504 [2024-12-09 05:17:16.875304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:22:34.504 05:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:22:34.764 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:22:35.024 [2024-12-09 05:17:17.314544] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:22:35.024 [2024-12-09 05:17:17.314715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:22:35.024 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:22:35.284 malloc0 01:22:35.284 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:22:35.543 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ye9BBM2pOr 01:22:35.543 05:17:17 
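The NVMeTLSkey-1:01:... strings generated just above are TLS PSKs in the NVMe/TCP interchange format: a prefix, a two-digit hash identifier, and a base64 blob consisting of the configured key with a CRC-32 appended. A minimal sketch of what the format_interchange_psk/format_key helpers appear to compute (the zlib/struct details and CRC byte order are assumptions inferred from the helper's inline python; the authoritative code is in test/nvmf/common.sh):

key=00112233445566778899aabbccddeeff   # raw key from the trace above
python3 - "$key" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack('<I', zlib.crc32(key))   # CRC-32 of the key bytes, little-endian (assumed)
print('NVMeTLSkey-1:01:' + base64.b64encode(key + crc).decode() + ':')   # '01' = hash id seen in the trace
EOF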
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:22:35.803 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ye9BBM2pOr 01:22:48.027 Initializing NVMe Controllers 01:22:48.027 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:22:48.027 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:22:48.027 Initialization complete. Launching workers. 01:22:48.027 ======================================================== 01:22:48.027 Latency(us) 01:22:48.027 Device Information : IOPS MiB/s Average min max 01:22:48.027 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16182.89 63.21 3955.23 779.73 5258.75 01:22:48.027 ======================================================== 01:22:48.027 Total : 16182.89 63.21 3955.23 779.73 5258.75 01:22:48.027 01:22:48.027 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ye9BBM2pOr 01:22:48.027 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:22:48.027 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:22:48.027 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:22:48.027 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ye9BBM2pOr 01:22:48.027 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:22:48.027 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71278 01:22:48.027 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:22:48.027 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71278 /var/tmp/bdevperf.sock 01:22:48.027 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71278 ']' 01:22:48.027 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:22:48.027 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:48.027 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:22:48.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
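For reference, the setup_nvmf_tgt steps traced above reduce to a short RPC sequence that enables TLS 1.3 on the ssl socket implementation, creates the TCP transport, and binds the PSK to the allowed host (commands copied from the trace, with the repository path shortened to rpc.py):

rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k marks the listener as TLS
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.ye9BBM2pOr          # PSK file in interchange format
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

bdevperf then attaches with the matching key (bdev_nvme_attach_controller ... --psk key0); the later negative tests drive the same attach path with a mismatched key, a wrong host NQN, and an empty key path and expect it to fail.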
01:22:48.027 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:48.027 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:22:48.027 05:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:22:48.027 [2024-12-09 05:17:28.377391] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:22:48.027 [2024-12-09 05:17:28.377461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71278 ] 01:22:48.027 [2024-12-09 05:17:28.530025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:22:48.027 [2024-12-09 05:17:28.576372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:22:48.027 [2024-12-09 05:17:28.616988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:22:48.027 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:48.027 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:22:48.027 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ye9BBM2pOr 01:22:48.027 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:22:48.027 [2024-12-09 05:17:29.617475] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:22:48.027 TLSTESTn1 01:22:48.027 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:22:48.027 Running I/O for 10 seconds... 
01:22:49.538 6441.00 IOPS, 25.16 MiB/s [2024-12-09T05:17:32.934Z] 6489.50 IOPS, 25.35 MiB/s [2024-12-09T05:17:33.873Z] 6511.67 IOPS, 25.44 MiB/s [2024-12-09T05:17:34.827Z] 6531.75 IOPS, 25.51 MiB/s [2024-12-09T05:17:36.210Z] 6533.20 IOPS, 25.52 MiB/s [2024-12-09T05:17:37.148Z] 6532.00 IOPS, 25.52 MiB/s [2024-12-09T05:17:38.084Z] 6535.29 IOPS, 25.53 MiB/s [2024-12-09T05:17:39.020Z] 6535.00 IOPS, 25.53 MiB/s [2024-12-09T05:17:39.957Z] 6532.89 IOPS, 25.52 MiB/s [2024-12-09T05:17:39.957Z] 6532.40 IOPS, 25.52 MiB/s 01:22:57.501 Latency(us) 01:22:57.501 [2024-12-09T05:17:39.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:22:57.501 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:22:57.501 Verification LBA range: start 0x0 length 0x2000 01:22:57.501 TLSTESTn1 : 10.01 6538.77 25.54 0.00 0.00 19545.30 3448.51 16255.22 01:22:57.501 [2024-12-09T05:17:39.957Z] =================================================================================================================== 01:22:57.501 [2024-12-09T05:17:39.957Z] Total : 6538.77 25.54 0.00 0.00 19545.30 3448.51 16255.22 01:22:57.501 { 01:22:57.501 "results": [ 01:22:57.501 { 01:22:57.501 "job": "TLSTESTn1", 01:22:57.501 "core_mask": "0x4", 01:22:57.501 "workload": "verify", 01:22:57.501 "status": "finished", 01:22:57.501 "verify_range": { 01:22:57.501 "start": 0, 01:22:57.501 "length": 8192 01:22:57.501 }, 01:22:57.501 "queue_depth": 128, 01:22:57.501 "io_size": 4096, 01:22:57.501 "runtime": 10.009685, 01:22:57.501 "iops": 6538.7672039629615, 01:22:57.501 "mibps": 25.54205939048032, 01:22:57.501 "io_failed": 0, 01:22:57.501 "io_timeout": 0, 01:22:57.501 "avg_latency_us": 19545.30172746317, 01:22:57.501 "min_latency_us": 3448.5100436681223, 01:22:57.501 "max_latency_us": 16255.217467248909 01:22:57.501 } 01:22:57.501 ], 01:22:57.501 "core_count": 1 01:22:57.501 } 01:22:57.501 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:22:57.501 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71278 01:22:57.501 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71278 ']' 01:22:57.501 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71278 01:22:57.501 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:22:57.501 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:22:57.501 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71278 01:22:57.501 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:22:57.501 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:22:57.501 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71278' 01:22:57.501 killing process with pid 71278 01:22:57.501 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71278 01:22:57.501 Received shutdown signal, test time was about 10.000000 seconds 01:22:57.501 01:22:57.501 Latency(us) 01:22:57.501 [2024-12-09T05:17:39.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:22:57.501 [2024-12-09T05:17:39.957Z] 
=================================================================================================================== 01:22:57.501 [2024-12-09T05:17:39.957Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:22:57.501 05:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71278 01:22:57.761 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.amUlndqDMK 01:22:57.761 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:22:57.761 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.amUlndqDMK 01:22:57.761 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:22:57.761 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.amUlndqDMK 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.amUlndqDMK 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71414 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71414 /var/tmp/bdevperf.sock 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71414 ']' 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:22:57.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:57.762 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:22:57.762 [2024-12-09 05:17:40.109230] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:22:57.762 [2024-12-09 05:17:40.109361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71414 ] 01:22:58.022 [2024-12-09 05:17:40.244739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:22:58.022 [2024-12-09 05:17:40.297750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:22:58.022 [2024-12-09 05:17:40.338138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:22:58.592 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:22:58.592 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:22:58.592 05:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.amUlndqDMK 01:22:58.852 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:22:59.112 [2024-12-09 05:17:41.363075] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:22:59.112 [2024-12-09 05:17:41.367643] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:22:59.112 [2024-12-09 05:17:41.368312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeaaff0 (107): Transport endpoint is not connected 01:22:59.112 [2024-12-09 05:17:41.369302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeaaff0 (9): Bad file descriptor 01:22:59.112 [2024-12-09 05:17:41.370298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 01:22:59.112 [2024-12-09 05:17:41.370353] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 01:22:59.112 [2024-12-09 05:17:41.370380] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 01:22:59.112 [2024-12-09 05:17:41.370423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
01:22:59.112 request: 01:22:59.112 { 01:22:59.112 "name": "TLSTEST", 01:22:59.112 "trtype": "tcp", 01:22:59.112 "traddr": "10.0.0.3", 01:22:59.112 "adrfam": "ipv4", 01:22:59.112 "trsvcid": "4420", 01:22:59.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:22:59.112 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:22:59.112 "prchk_reftag": false, 01:22:59.112 "prchk_guard": false, 01:22:59.112 "hdgst": false, 01:22:59.112 "ddgst": false, 01:22:59.112 "psk": "key0", 01:22:59.112 "allow_unrecognized_csi": false, 01:22:59.112 "method": "bdev_nvme_attach_controller", 01:22:59.113 "req_id": 1 01:22:59.113 } 01:22:59.113 Got JSON-RPC error response 01:22:59.113 response: 01:22:59.113 { 01:22:59.113 "code": -5, 01:22:59.113 "message": "Input/output error" 01:22:59.113 } 01:22:59.113 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71414 01:22:59.113 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71414 ']' 01:22:59.113 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71414 01:22:59.113 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:22:59.113 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:22:59.113 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71414 01:22:59.113 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:22:59.113 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:22:59.113 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71414' 01:22:59.113 killing process with pid 71414 01:22:59.113 Received shutdown signal, test time was about 10.000000 seconds 01:22:59.113 01:22:59.113 Latency(us) 01:22:59.113 [2024-12-09T05:17:41.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:22:59.113 [2024-12-09T05:17:41.569Z] =================================================================================================================== 01:22:59.113 [2024-12-09T05:17:41.569Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:22:59.113 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71414 01:22:59.113 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71414 01:22:59.373 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:22:59.373 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:22:59.373 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:22:59.373 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:22:59.373 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:22:59.373 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ye9BBM2pOr 01:22:59.373 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ye9BBM2pOr 
01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ye9BBM2pOr 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ye9BBM2pOr 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71441 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71441 /var/tmp/bdevperf.sock 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71441 ']' 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:22:59.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:22:59.374 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:22:59.374 [2024-12-09 05:17:41.666666] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:22:59.374 [2024-12-09 05:17:41.666800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71441 ] 01:22:59.374 [2024-12-09 05:17:41.805620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:22:59.634 [2024-12-09 05:17:41.856649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:22:59.634 [2024-12-09 05:17:41.897190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:00.203 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:00.204 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:00.204 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ye9BBM2pOr 01:23:00.463 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 01:23:00.722 [2024-12-09 05:17:42.949891] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:23:00.722 [2024-12-09 05:17:42.954402] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 01:23:00.722 [2024-12-09 05:17:42.954542] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 01:23:00.722 [2024-12-09 05:17:42.954625] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:23:00.722 [2024-12-09 05:17:42.955184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b9ff0 (107): Transport endpoint is not connected 01:23:00.722 [2024-12-09 05:17:42.956173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b9ff0 (9): Bad file descriptor 01:23:00.722 [2024-12-09 05:17:42.957168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 01:23:00.722 [2024-12-09 05:17:42.957213] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 01:23:00.722 [2024-12-09 05:17:42.957242] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 01:23:00.722 [2024-12-09 05:17:42.957286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
01:23:00.722 request: 01:23:00.722 { 01:23:00.722 "name": "TLSTEST", 01:23:00.722 "trtype": "tcp", 01:23:00.722 "traddr": "10.0.0.3", 01:23:00.722 "adrfam": "ipv4", 01:23:00.722 "trsvcid": "4420", 01:23:00.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:23:00.722 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:23:00.722 "prchk_reftag": false, 01:23:00.722 "prchk_guard": false, 01:23:00.722 "hdgst": false, 01:23:00.722 "ddgst": false, 01:23:00.722 "psk": "key0", 01:23:00.722 "allow_unrecognized_csi": false, 01:23:00.722 "method": "bdev_nvme_attach_controller", 01:23:00.722 "req_id": 1 01:23:00.722 } 01:23:00.722 Got JSON-RPC error response 01:23:00.722 response: 01:23:00.722 { 01:23:00.722 "code": -5, 01:23:00.722 "message": "Input/output error" 01:23:00.722 } 01:23:00.722 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71441 01:23:00.722 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71441 ']' 01:23:00.722 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71441 01:23:00.722 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:00.722 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:00.722 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71441 01:23:00.722 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:23:00.722 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:23:00.722 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71441' 01:23:00.722 killing process with pid 71441 01:23:00.722 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71441 01:23:00.722 Received shutdown signal, test time was about 10.000000 seconds 01:23:00.722 01:23:00.722 Latency(us) 01:23:00.722 [2024-12-09T05:17:43.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:00.722 [2024-12-09T05:17:43.178Z] =================================================================================================================== 01:23:00.722 [2024-12-09T05:17:43.178Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:23:00.722 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71441 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ye9BBM2pOr 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ye9BBM2pOr 
01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ye9BBM2pOr 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ye9BBM2pOr 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71471 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71471 /var/tmp/bdevperf.sock 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71471 ']' 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:23:00.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:00.982 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:00.983 [2024-12-09 05:17:43.255103] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:23:00.983 [2024-12-09 05:17:43.255233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71471 ] 01:23:00.983 [2024-12-09 05:17:43.400891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:01.242 [2024-12-09 05:17:43.450195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:23:01.242 [2024-12-09 05:17:43.491001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:01.242 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:01.242 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:01.242 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ye9BBM2pOr 01:23:01.501 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 01:23:01.501 [2024-12-09 05:17:43.948074] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:23:01.501 [2024-12-09 05:17:43.952547] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 01:23:01.501 [2024-12-09 05:17:43.952658] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 01:23:01.501 [2024-12-09 05:17:43.952731] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:23:01.501 [2024-12-09 05:17:43.953335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3fff0 (107): Transport endpoint is not connected 01:23:01.501 [2024-12-09 05:17:43.954331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3fff0 (9): Bad file descriptor 01:23:01.501 [2024-12-09 05:17:43.955320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 01:23:01.501 [2024-12-09 05:17:43.955379] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 01:23:01.502 [2024-12-09 05:17:43.955404] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 01:23:01.502 [2024-12-09 05:17:43.955445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
01:23:01.761 request: 01:23:01.761 { 01:23:01.761 "name": "TLSTEST", 01:23:01.761 "trtype": "tcp", 01:23:01.761 "traddr": "10.0.0.3", 01:23:01.761 "adrfam": "ipv4", 01:23:01.761 "trsvcid": "4420", 01:23:01.761 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:23:01.761 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:23:01.761 "prchk_reftag": false, 01:23:01.761 "prchk_guard": false, 01:23:01.761 "hdgst": false, 01:23:01.761 "ddgst": false, 01:23:01.761 "psk": "key0", 01:23:01.761 "allow_unrecognized_csi": false, 01:23:01.761 "method": "bdev_nvme_attach_controller", 01:23:01.761 "req_id": 1 01:23:01.761 } 01:23:01.761 Got JSON-RPC error response 01:23:01.761 response: 01:23:01.761 { 01:23:01.761 "code": -5, 01:23:01.761 "message": "Input/output error" 01:23:01.761 } 01:23:01.761 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71471 01:23:01.761 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71471 ']' 01:23:01.761 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71471 01:23:01.761 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:01.761 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:01.761 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71471 01:23:01.761 killing process with pid 71471 01:23:01.761 Received shutdown signal, test time was about 10.000000 seconds 01:23:01.761 01:23:01.761 Latency(us) 01:23:01.761 [2024-12-09T05:17:44.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:01.761 [2024-12-09T05:17:44.217Z] =================================================================================================================== 01:23:01.761 [2024-12-09T05:17:44.217Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:23:01.761 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:23:01.761 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:23:01.761 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71471' 01:23:01.761 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71471 01:23:01.761 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71471 01:23:01.761 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:23:01.761 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:23:01.761 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:23:01.761 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:23:01.761 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:23:01.761 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:23:01.761 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:23:01.761 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:23:01.761 05:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71492 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71492 /var/tmp/bdevperf.sock 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71492 ']' 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:23:02.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:02.021 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:02.021 [2024-12-09 05:17:44.254532] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
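The NOT/valid_exec_arg traces above implement the negative-test pattern used throughout this suite: run the wrapped command, capture its exit status, and report success only if the command failed. A minimal stand-in for that wrapper is sketched here; the real helper lives in common/autotest_common.sh and additionally distinguishes signal exits (the es > 128 check visible in the trace).

  # simplified stand-in for the NOT helper, for illustration only
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))   # invert: succeed only when the wrapped command failed
  }
  NOT false && echo "negative test passed"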
01:23:02.021 [2024-12-09 05:17:44.254634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71492 ] 01:23:02.021 [2024-12-09 05:17:44.393685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:02.021 [2024-12-09 05:17:44.445035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:23:02.280 [2024-12-09 05:17:44.486044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:02.849 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:02.849 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:02.849 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 01:23:03.107 [2024-12-09 05:17:45.327022] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 01:23:03.107 [2024-12-09 05:17:45.327161] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:23:03.107 request: 01:23:03.107 { 01:23:03.107 "name": "key0", 01:23:03.107 "path": "", 01:23:03.107 "method": "keyring_file_add_key", 01:23:03.107 "req_id": 1 01:23:03.107 } 01:23:03.107 Got JSON-RPC error response 01:23:03.107 response: 01:23:03.107 { 01:23:03.107 "code": -1, 01:23:03.107 "message": "Operation not permitted" 01:23:03.107 } 01:23:03.107 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:23:03.107 [2024-12-09 05:17:45.542737] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:23:03.107 [2024-12-09 05:17:45.542787] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 01:23:03.107 request: 01:23:03.107 { 01:23:03.107 "name": "TLSTEST", 01:23:03.107 "trtype": "tcp", 01:23:03.107 "traddr": "10.0.0.3", 01:23:03.107 "adrfam": "ipv4", 01:23:03.107 "trsvcid": "4420", 01:23:03.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:23:03.107 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:23:03.107 "prchk_reftag": false, 01:23:03.107 "prchk_guard": false, 01:23:03.107 "hdgst": false, 01:23:03.107 "ddgst": false, 01:23:03.107 "psk": "key0", 01:23:03.107 "allow_unrecognized_csi": false, 01:23:03.107 "method": "bdev_nvme_attach_controller", 01:23:03.107 "req_id": 1 01:23:03.107 } 01:23:03.107 Got JSON-RPC error response 01:23:03.107 response: 01:23:03.107 { 01:23:03.107 "code": -126, 01:23:03.107 "message": "Required key not available" 01:23:03.107 } 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71492 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71492 ']' 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71492 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:03.373 05:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71492 01:23:03.373 killing process with pid 71492 01:23:03.373 Received shutdown signal, test time was about 10.000000 seconds 01:23:03.373 01:23:03.373 Latency(us) 01:23:03.373 [2024-12-09T05:17:45.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:03.373 [2024-12-09T05:17:45.829Z] =================================================================================================================== 01:23:03.373 [2024-12-09T05:17:45.829Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71492' 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71492 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71492 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71051 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71051 ']' 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71051 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71051 01:23:03.373 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:23:03.635 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:23:03.635 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71051' 01:23:03.635 killing process with pid 71051 01:23:03.635 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71051 01:23:03.635 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71051 01:23:03.635 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 01:23:03.635 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 01:23:03.635 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 01:23:03.635 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 01:23:03.635 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 01:23:03.635 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 01:23:03.635 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 01:23:03.635 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.rund1qqybI 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.rund1qqybI 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71530 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71530 01:23:03.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71530 ']' 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:03.894 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:03.894 [2024-12-09 05:17:46.164220] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:23:03.894 [2024-12-09 05:17:46.164289] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:23:03.894 [2024-12-09 05:17:46.313193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:04.153 [2024-12-09 05:17:46.363309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:23:04.153 [2024-12-09 05:17:46.363378] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
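The key_long value generated above (NVMeTLSkey-1:02:...:) follows the TLS PSK interchange format: a fixed prefix, a two-digit hash indicator (02 here, which appears to select SHA-384), and a base64 blob of the configured key bytes with a CRC-32 trailer, terminated by a colon. A sketch that should reproduce it is below; it assumes the CRC-32 trailer is appended little-endian, as SPDK's format_key helper appears to do, so verify the output against the key_long shown above before relying on it.

  key=00112233445566778899aabbccddeeff0011223344556677
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:02:%s:" % base64.b64encode(k+crc).decode())' "$key"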
01:23:04.153 [2024-12-09 05:17:46.363385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:23:04.153 [2024-12-09 05:17:46.363389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:23:04.153 [2024-12-09 05:17:46.363393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:23:04.153 [2024-12-09 05:17:46.363663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:23:04.153 [2024-12-09 05:17:46.404564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:04.720 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:04.720 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:04.720 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:23:04.720 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:23:04.720 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:04.720 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:23:04.720 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.rund1qqybI 01:23:04.720 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rund1qqybI 01:23:04.720 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:23:04.979 [2024-12-09 05:17:47.293425] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:23:04.979 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:23:05.239 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:23:05.497 [2024-12-09 05:17:47.704672] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:23:05.497 [2024-12-09 05:17:47.704855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:23:05.497 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:23:05.497 malloc0 01:23:05.497 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:23:05.754 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rund1qqybI 01:23:06.013 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:23:06.272 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rund1qqybI 01:23:06.272 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
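Stripped of the xtrace noise, the setup_nvmf_tgt steps traced above reduce to the following RPC sequence against the target (paths, NQNs and key file exactly as in this log; shown here only as a condensed sketch):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 /tmp/tmp.rund1qqybI        # key file must stay 0600
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0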
01:23:06.272 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:23:06.272 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:23:06.272 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rund1qqybI 01:23:06.272 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:23:06.272 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71586 01:23:06.272 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:23:06.272 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:23:06.272 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71586 /var/tmp/bdevperf.sock 01:23:06.272 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71586 ']' 01:23:06.272 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:23:06.272 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:06.272 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:23:06.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:23:06.272 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:06.272 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:06.272 [2024-12-09 05:17:48.570464] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
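On the initiator side, the successful run that follows boils down to launching bdevperf in wait-for-RPC mode, registering the same key on its socket, attaching the TLS controller, and driving I/O with bdevperf.py; a condensed sketch with the paths used in this log (the real script also waits for the RPC socket to come up before issuing commands):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $RPC keyring_file_add_key key0 /tmp/tmp.rund1qqybI
  $RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests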
01:23:06.272 [2024-12-09 05:17:48.570632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71586 ] 01:23:06.272 [2024-12-09 05:17:48.720964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:06.530 [2024-12-09 05:17:48.772785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:23:06.530 [2024-12-09 05:17:48.813878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:07.097 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:07.097 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:07.097 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rund1qqybI 01:23:07.356 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:23:07.356 [2024-12-09 05:17:49.801675] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:23:07.614 TLSTESTn1 01:23:07.614 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:23:07.614 Running I/O for 10 seconds... 01:23:09.949 6395.00 IOPS, 24.98 MiB/s [2024-12-09T05:17:53.342Z] 6427.50 IOPS, 25.11 MiB/s [2024-12-09T05:17:54.280Z] 6423.33 IOPS, 25.09 MiB/s [2024-12-09T05:17:55.245Z] 6457.75 IOPS, 25.23 MiB/s [2024-12-09T05:17:56.182Z] 6474.40 IOPS, 25.29 MiB/s [2024-12-09T05:17:57.114Z] 6488.00 IOPS, 25.34 MiB/s [2024-12-09T05:17:58.048Z] 6485.00 IOPS, 25.33 MiB/s [2024-12-09T05:17:58.983Z] 6180.00 IOPS, 24.14 MiB/s [2024-12-09T05:18:00.359Z] 5923.56 IOPS, 23.14 MiB/s [2024-12-09T05:18:00.359Z] 5814.90 IOPS, 22.71 MiB/s 01:23:17.904 Latency(us) 01:23:17.904 [2024-12-09T05:18:00.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:17.904 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:23:17.904 Verification LBA range: start 0x0 length 0x2000 01:23:17.904 TLSTESTn1 : 10.01 5820.48 22.74 0.00 0.00 21955.28 3863.48 29534.13 01:23:17.904 [2024-12-09T05:18:00.360Z] =================================================================================================================== 01:23:17.904 [2024-12-09T05:18:00.360Z] Total : 5820.48 22.74 0.00 0.00 21955.28 3863.48 29534.13 01:23:17.904 { 01:23:17.904 "results": [ 01:23:17.904 { 01:23:17.904 "job": "TLSTESTn1", 01:23:17.904 "core_mask": "0x4", 01:23:17.904 "workload": "verify", 01:23:17.904 "status": "finished", 01:23:17.904 "verify_range": { 01:23:17.904 "start": 0, 01:23:17.904 "length": 8192 01:23:17.904 }, 01:23:17.904 "queue_depth": 128, 01:23:17.904 "io_size": 4096, 01:23:17.904 "runtime": 10.011374, 01:23:17.904 "iops": 5820.479786291073, 01:23:17.904 "mibps": 22.736249165199503, 01:23:17.904 "io_failed": 0, 01:23:17.904 "io_timeout": 0, 01:23:17.904 "avg_latency_us": 21955.27574091212, 01:23:17.904 "min_latency_us": 3863.475982532751, 01:23:17.904 
"max_latency_us": 29534.12751091703 01:23:17.904 } 01:23:17.904 ], 01:23:17.904 "core_count": 1 01:23:17.904 } 01:23:17.904 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:23:17.904 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71586 01:23:17.904 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71586 ']' 01:23:17.904 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71586 01:23:17.904 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71586 01:23:17.904 killing process with pid 71586 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71586' 01:23:17.904 Received shutdown signal, test time was about 10.000000 seconds 01:23:17.904 01:23:17.904 Latency(us) 01:23:17.904 [2024-12-09T05:18:00.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:17.904 [2024-12-09T05:18:00.360Z] =================================================================================================================== 01:23:17.904 [2024-12-09T05:18:00.360Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71586 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71586 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.rund1qqybI 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rund1qqybI 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rund1qqybI 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rund1qqybI 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rund1qqybI 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71716 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71716 /var/tmp/bdevperf.sock 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71716 ']' 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:23:17.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:17.904 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:17.904 [2024-12-09 05:18:00.312186] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:23:17.904 [2024-12-09 05:18:00.312309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71716 ] 01:23:18.163 [2024-12-09 05:18:00.454205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:18.163 [2024-12-09 05:18:00.508914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:23:18.163 [2024-12-09 05:18:00.550229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:18.733 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:18.733 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:18.991 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rund1qqybI 01:23:18.991 [2024-12-09 05:18:01.347412] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rund1qqybI': 0100666 01:23:18.991 [2024-12-09 05:18:01.347552] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:23:18.991 request: 01:23:18.991 { 01:23:18.991 "name": "key0", 01:23:18.991 "path": "/tmp/tmp.rund1qqybI", 01:23:18.991 "method": "keyring_file_add_key", 01:23:18.991 "req_id": 1 01:23:18.991 } 01:23:18.991 Got JSON-RPC error response 01:23:18.991 response: 01:23:18.991 { 01:23:18.991 "code": -1, 01:23:18.991 "message": "Operation not permitted" 01:23:18.991 } 01:23:18.991 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:23:19.249 [2024-12-09 05:18:01.579106] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:23:19.249 [2024-12-09 05:18:01.579259] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 01:23:19.249 request: 01:23:19.249 { 01:23:19.249 "name": "TLSTEST", 01:23:19.249 "trtype": "tcp", 01:23:19.249 "traddr": "10.0.0.3", 01:23:19.249 "adrfam": "ipv4", 01:23:19.249 "trsvcid": "4420", 01:23:19.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:23:19.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:23:19.249 "prchk_reftag": false, 01:23:19.249 "prchk_guard": false, 01:23:19.249 "hdgst": false, 01:23:19.249 "ddgst": false, 01:23:19.249 "psk": "key0", 01:23:19.249 "allow_unrecognized_csi": false, 01:23:19.249 "method": "bdev_nvme_attach_controller", 01:23:19.249 "req_id": 1 01:23:19.249 } 01:23:19.249 Got JSON-RPC error response 01:23:19.249 response: 01:23:19.249 { 01:23:19.250 "code": -126, 01:23:19.250 "message": "Required key not available" 01:23:19.250 } 01:23:19.250 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71716 01:23:19.250 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71716 ']' 01:23:19.250 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71716 01:23:19.250 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:19.250 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:19.250 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71716 01:23:19.250 killing process with pid 71716 01:23:19.250 Received shutdown signal, test time was about 10.000000 seconds 01:23:19.250 01:23:19.250 Latency(us) 01:23:19.250 [2024-12-09T05:18:01.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:19.250 [2024-12-09T05:18:01.706Z] =================================================================================================================== 01:23:19.250 [2024-12-09T05:18:01.706Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:23:19.250 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:23:19.250 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:23:19.250 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71716' 01:23:19.250 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71716 01:23:19.250 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71716 01:23:19.508 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:23:19.508 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:23:19.508 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:23:19.508 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:23:19.508 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:23:19.508 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71530 01:23:19.508 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71530 ']' 01:23:19.508 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71530 01:23:19.508 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:19.508 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:19.508 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71530 01:23:19.508 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:23:19.508 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:23:19.508 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71530' 01:23:19.508 killing process with pid 71530 01:23:19.508 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71530 01:23:19.508 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71530 01:23:19.782 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 01:23:19.782 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:23:19.782 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:23:19.782 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 01:23:19.782 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71755 01:23:19.782 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:23:19.782 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71755 01:23:19.782 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71755 ']' 01:23:19.782 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:19.782 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:19.782 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:23:19.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:23:19.782 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:19.782 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:19.782 [2024-12-09 05:18:02.140855] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:23:19.782 [2024-12-09 05:18:02.140999] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:23:20.057 [2024-12-09 05:18:02.291420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:20.057 [2024-12-09 05:18:02.333576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:23:20.057 [2024-12-09 05:18:02.333705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:23:20.057 [2024-12-09 05:18:02.333730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:23:20.057 [2024-12-09 05:18:02.333735] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:23:20.057 [2024-12-09 05:18:02.333739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
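The "Invalid permissions for key file '/tmp/tmp.rund1qqybI': 0100666" rejection above comes from the file-based keyring, which appears to refuse key files that are readable or writable by group or others; the test deliberately chmod'ed the key to 0666 to provoke it. A quick pre-check one could run before registering a key (a hypothetical snippet, not part of these test scripts):

  mode=$(stat -c '%a' /tmp/tmp.rund1qqybI)
  if [[ $mode != 600 ]]; then
      echo "key file mode is $mode, restricting to owner only" >&2
      chmod 0600 /tmp/tmp.rund1qqybI
  fi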
01:23:20.057 [2024-12-09 05:18:02.334022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:23:20.057 [2024-12-09 05:18:02.374465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:20.625 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:20.625 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:20.625 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:23:20.625 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:23:20.625 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:20.625 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:23:20.625 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.rund1qqybI 01:23:20.625 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:23:20.625 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.rund1qqybI 01:23:20.625 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 01:23:20.625 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:23:20.626 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 01:23:20.626 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:23:20.626 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.rund1qqybI 01:23:20.626 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rund1qqybI 01:23:20.626 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:23:20.884 [2024-12-09 05:18:03.235676] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:23:20.884 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:23:21.159 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:23:21.418 [2024-12-09 05:18:03.642961] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:23:21.418 [2024-12-09 05:18:03.643164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:23:21.418 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:23:21.418 malloc0 01:23:21.418 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:23:21.676 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rund1qqybI 01:23:21.934 
[2024-12-09 05:18:04.262255] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rund1qqybI': 0100666 01:23:21.935 [2024-12-09 05:18:04.262298] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:23:21.935 request: 01:23:21.935 { 01:23:21.935 "name": "key0", 01:23:21.935 "path": "/tmp/tmp.rund1qqybI", 01:23:21.935 "method": "keyring_file_add_key", 01:23:21.935 "req_id": 1 01:23:21.935 } 01:23:21.935 Got JSON-RPC error response 01:23:21.935 response: 01:23:21.935 { 01:23:21.935 "code": -1, 01:23:21.935 "message": "Operation not permitted" 01:23:21.935 } 01:23:21.935 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:23:22.193 [2024-12-09 05:18:04.465913] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 01:23:22.193 [2024-12-09 05:18:04.465971] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 01:23:22.193 request: 01:23:22.193 { 01:23:22.193 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:22.193 "host": "nqn.2016-06.io.spdk:host1", 01:23:22.193 "psk": "key0", 01:23:22.193 "method": "nvmf_subsystem_add_host", 01:23:22.193 "req_id": 1 01:23:22.193 } 01:23:22.193 Got JSON-RPC error response 01:23:22.193 response: 01:23:22.193 { 01:23:22.193 "code": -32603, 01:23:22.193 "message": "Internal error" 01:23:22.193 } 01:23:22.193 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:23:22.193 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:23:22.193 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:23:22.193 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:23:22.193 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71755 01:23:22.193 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71755 ']' 01:23:22.193 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71755 01:23:22.193 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:22.193 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:22.193 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71755 01:23:22.193 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:23:22.193 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:23:22.193 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71755' 01:23:22.193 killing process with pid 71755 01:23:22.193 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71755 01:23:22.193 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71755 01:23:22.453 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.rund1qqybI 01:23:22.453 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 01:23:22.453 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:23:22.453 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:23:22.453 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:22.453 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71819 01:23:22.453 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:23:22.453 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71819 01:23:22.453 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71819 ']' 01:23:22.453 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:22.453 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:22.453 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:23:22.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:23:22.453 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:22.453 05:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:22.453 [2024-12-09 05:18:04.790302] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:23:22.453 [2024-12-09 05:18:04.790455] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:23:22.712 [2024-12-09 05:18:04.940255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:22.712 [2024-12-09 05:18:04.991807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:23:22.712 [2024-12-09 05:18:04.991855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:23:22.712 [2024-12-09 05:18:04.991861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:23:22.712 [2024-12-09 05:18:04.991866] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:23:22.712 [2024-12-09 05:18:04.991870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:23:22.712 [2024-12-09 05:18:04.992136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:23:22.712 [2024-12-09 05:18:05.033617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:23.278 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:23.278 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:23.278 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:23:23.278 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:23:23.278 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:23.278 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:23:23.278 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.rund1qqybI 01:23:23.278 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rund1qqybI 01:23:23.278 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:23:23.537 [2024-12-09 05:18:05.890639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:23:23.537 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:23:23.796 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:23:24.055 [2024-12-09 05:18:06.313957] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:23:24.055 [2024-12-09 05:18:06.314183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:23:24.055 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:23:24.315 malloc0 01:23:24.315 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:23:24.315 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rund1qqybI 01:23:24.574 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:23:24.834 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71870 01:23:24.834 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:23:24.834 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:23:24.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
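The save_config dump that follows captures the entire target configuration, including the registered key file and the TLS listener ("secure_channel": true). To pull just those pieces out of such a dump, something like the jq sketch below would work (jq is assumed to be available; it is not part of this test):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgt.json
  jq '.subsystems[] | select(.subsystem == "keyring").config' tgt.json
  jq '.subsystems[] | select(.subsystem == "nvmf").config[] | select(.method == "nvmf_subsystem_add_listener")' tgt.json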
01:23:24.834 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71870 /var/tmp/bdevperf.sock 01:23:24.834 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71870 ']' 01:23:24.834 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:23:24.834 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:24.834 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:23:24.834 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:24.834 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:24.834 [2024-12-09 05:18:07.191607] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:23:24.834 [2024-12-09 05:18:07.191682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71870 ] 01:23:25.093 [2024-12-09 05:18:07.345700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:25.093 [2024-12-09 05:18:07.389017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:23:25.093 [2024-12-09 05:18:07.429506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:25.662 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:25.662 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:25.662 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rund1qqybI 01:23:25.921 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:23:26.180 [2024-12-09 05:18:08.434237] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:23:26.180 TLSTESTn1 01:23:26.180 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 01:23:26.440 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 01:23:26.440 "subsystems": [ 01:23:26.440 { 01:23:26.440 "subsystem": "keyring", 01:23:26.440 "config": [ 01:23:26.440 { 01:23:26.440 "method": "keyring_file_add_key", 01:23:26.440 "params": { 01:23:26.440 "name": "key0", 01:23:26.440 "path": "/tmp/tmp.rund1qqybI" 01:23:26.440 } 01:23:26.440 } 01:23:26.440 ] 01:23:26.440 }, 01:23:26.440 { 01:23:26.440 "subsystem": "iobuf", 01:23:26.440 "config": [ 01:23:26.440 { 01:23:26.440 "method": "iobuf_set_options", 01:23:26.440 "params": { 01:23:26.440 "small_pool_count": 8192, 01:23:26.440 "large_pool_count": 1024, 01:23:26.440 "small_bufsize": 8192, 01:23:26.440 "large_bufsize": 135168, 01:23:26.440 "enable_numa": false 01:23:26.440 } 01:23:26.440 } 01:23:26.440 ] 01:23:26.440 }, 01:23:26.440 { 01:23:26.440 
"subsystem": "sock", 01:23:26.440 "config": [ 01:23:26.440 { 01:23:26.440 "method": "sock_set_default_impl", 01:23:26.440 "params": { 01:23:26.440 "impl_name": "uring" 01:23:26.440 } 01:23:26.440 }, 01:23:26.440 { 01:23:26.440 "method": "sock_impl_set_options", 01:23:26.440 "params": { 01:23:26.440 "impl_name": "ssl", 01:23:26.440 "recv_buf_size": 4096, 01:23:26.440 "send_buf_size": 4096, 01:23:26.440 "enable_recv_pipe": true, 01:23:26.440 "enable_quickack": false, 01:23:26.440 "enable_placement_id": 0, 01:23:26.440 "enable_zerocopy_send_server": true, 01:23:26.440 "enable_zerocopy_send_client": false, 01:23:26.440 "zerocopy_threshold": 0, 01:23:26.440 "tls_version": 0, 01:23:26.440 "enable_ktls": false 01:23:26.440 } 01:23:26.440 }, 01:23:26.440 { 01:23:26.440 "method": "sock_impl_set_options", 01:23:26.440 "params": { 01:23:26.440 "impl_name": "posix", 01:23:26.440 "recv_buf_size": 2097152, 01:23:26.440 "send_buf_size": 2097152, 01:23:26.440 "enable_recv_pipe": true, 01:23:26.440 "enable_quickack": false, 01:23:26.440 "enable_placement_id": 0, 01:23:26.440 "enable_zerocopy_send_server": true, 01:23:26.440 "enable_zerocopy_send_client": false, 01:23:26.440 "zerocopy_threshold": 0, 01:23:26.440 "tls_version": 0, 01:23:26.440 "enable_ktls": false 01:23:26.440 } 01:23:26.440 }, 01:23:26.440 { 01:23:26.440 "method": "sock_impl_set_options", 01:23:26.441 "params": { 01:23:26.441 "impl_name": "uring", 01:23:26.441 "recv_buf_size": 2097152, 01:23:26.441 "send_buf_size": 2097152, 01:23:26.441 "enable_recv_pipe": true, 01:23:26.441 "enable_quickack": false, 01:23:26.441 "enable_placement_id": 0, 01:23:26.441 "enable_zerocopy_send_server": false, 01:23:26.441 "enable_zerocopy_send_client": false, 01:23:26.441 "zerocopy_threshold": 0, 01:23:26.441 "tls_version": 0, 01:23:26.441 "enable_ktls": false 01:23:26.441 } 01:23:26.441 } 01:23:26.441 ] 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "subsystem": "vmd", 01:23:26.441 "config": [] 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "subsystem": "accel", 01:23:26.441 "config": [ 01:23:26.441 { 01:23:26.441 "method": "accel_set_options", 01:23:26.441 "params": { 01:23:26.441 "small_cache_size": 128, 01:23:26.441 "large_cache_size": 16, 01:23:26.441 "task_count": 2048, 01:23:26.441 "sequence_count": 2048, 01:23:26.441 "buf_count": 2048 01:23:26.441 } 01:23:26.441 } 01:23:26.441 ] 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "subsystem": "bdev", 01:23:26.441 "config": [ 01:23:26.441 { 01:23:26.441 "method": "bdev_set_options", 01:23:26.441 "params": { 01:23:26.441 "bdev_io_pool_size": 65535, 01:23:26.441 "bdev_io_cache_size": 256, 01:23:26.441 "bdev_auto_examine": true, 01:23:26.441 "iobuf_small_cache_size": 128, 01:23:26.441 "iobuf_large_cache_size": 16 01:23:26.441 } 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "method": "bdev_raid_set_options", 01:23:26.441 "params": { 01:23:26.441 "process_window_size_kb": 1024, 01:23:26.441 "process_max_bandwidth_mb_sec": 0 01:23:26.441 } 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "method": "bdev_iscsi_set_options", 01:23:26.441 "params": { 01:23:26.441 "timeout_sec": 30 01:23:26.441 } 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "method": "bdev_nvme_set_options", 01:23:26.441 "params": { 01:23:26.441 "action_on_timeout": "none", 01:23:26.441 "timeout_us": 0, 01:23:26.441 "timeout_admin_us": 0, 01:23:26.441 "keep_alive_timeout_ms": 10000, 01:23:26.441 "arbitration_burst": 0, 01:23:26.441 "low_priority_weight": 0, 01:23:26.441 "medium_priority_weight": 0, 01:23:26.441 "high_priority_weight": 0, 01:23:26.441 
"nvme_adminq_poll_period_us": 10000, 01:23:26.441 "nvme_ioq_poll_period_us": 0, 01:23:26.441 "io_queue_requests": 0, 01:23:26.441 "delay_cmd_submit": true, 01:23:26.441 "transport_retry_count": 4, 01:23:26.441 "bdev_retry_count": 3, 01:23:26.441 "transport_ack_timeout": 0, 01:23:26.441 "ctrlr_loss_timeout_sec": 0, 01:23:26.441 "reconnect_delay_sec": 0, 01:23:26.441 "fast_io_fail_timeout_sec": 0, 01:23:26.441 "disable_auto_failback": false, 01:23:26.441 "generate_uuids": false, 01:23:26.441 "transport_tos": 0, 01:23:26.441 "nvme_error_stat": false, 01:23:26.441 "rdma_srq_size": 0, 01:23:26.441 "io_path_stat": false, 01:23:26.441 "allow_accel_sequence": false, 01:23:26.441 "rdma_max_cq_size": 0, 01:23:26.441 "rdma_cm_event_timeout_ms": 0, 01:23:26.441 "dhchap_digests": [ 01:23:26.441 "sha256", 01:23:26.441 "sha384", 01:23:26.441 "sha512" 01:23:26.441 ], 01:23:26.441 "dhchap_dhgroups": [ 01:23:26.441 "null", 01:23:26.441 "ffdhe2048", 01:23:26.441 "ffdhe3072", 01:23:26.441 "ffdhe4096", 01:23:26.441 "ffdhe6144", 01:23:26.441 "ffdhe8192" 01:23:26.441 ] 01:23:26.441 } 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "method": "bdev_nvme_set_hotplug", 01:23:26.441 "params": { 01:23:26.441 "period_us": 100000, 01:23:26.441 "enable": false 01:23:26.441 } 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "method": "bdev_malloc_create", 01:23:26.441 "params": { 01:23:26.441 "name": "malloc0", 01:23:26.441 "num_blocks": 8192, 01:23:26.441 "block_size": 4096, 01:23:26.441 "physical_block_size": 4096, 01:23:26.441 "uuid": "8b4a6dbf-0ea3-4fca-ba25-eb7192e1fdfe", 01:23:26.441 "optimal_io_boundary": 0, 01:23:26.441 "md_size": 0, 01:23:26.441 "dif_type": 0, 01:23:26.441 "dif_is_head_of_md": false, 01:23:26.441 "dif_pi_format": 0 01:23:26.441 } 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "method": "bdev_wait_for_examine" 01:23:26.441 } 01:23:26.441 ] 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "subsystem": "nbd", 01:23:26.441 "config": [] 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "subsystem": "scheduler", 01:23:26.441 "config": [ 01:23:26.441 { 01:23:26.441 "method": "framework_set_scheduler", 01:23:26.441 "params": { 01:23:26.441 "name": "static" 01:23:26.441 } 01:23:26.441 } 01:23:26.441 ] 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "subsystem": "nvmf", 01:23:26.441 "config": [ 01:23:26.441 { 01:23:26.441 "method": "nvmf_set_config", 01:23:26.441 "params": { 01:23:26.441 "discovery_filter": "match_any", 01:23:26.441 "admin_cmd_passthru": { 01:23:26.441 "identify_ctrlr": false 01:23:26.441 }, 01:23:26.441 "dhchap_digests": [ 01:23:26.441 "sha256", 01:23:26.441 "sha384", 01:23:26.441 "sha512" 01:23:26.441 ], 01:23:26.441 "dhchap_dhgroups": [ 01:23:26.441 "null", 01:23:26.441 "ffdhe2048", 01:23:26.441 "ffdhe3072", 01:23:26.441 "ffdhe4096", 01:23:26.441 "ffdhe6144", 01:23:26.441 "ffdhe8192" 01:23:26.441 ] 01:23:26.441 } 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "method": "nvmf_set_max_subsystems", 01:23:26.441 "params": { 01:23:26.441 "max_subsystems": 1024 01:23:26.441 } 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "method": "nvmf_set_crdt", 01:23:26.441 "params": { 01:23:26.441 "crdt1": 0, 01:23:26.441 "crdt2": 0, 01:23:26.441 "crdt3": 0 01:23:26.441 } 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "method": "nvmf_create_transport", 01:23:26.441 "params": { 01:23:26.441 "trtype": "TCP", 01:23:26.441 "max_queue_depth": 128, 01:23:26.441 "max_io_qpairs_per_ctrlr": 127, 01:23:26.441 "in_capsule_data_size": 4096, 01:23:26.441 "max_io_size": 131072, 01:23:26.441 "io_unit_size": 131072, 01:23:26.441 "max_aq_depth": 128, 
01:23:26.441 "num_shared_buffers": 511, 01:23:26.441 "buf_cache_size": 4294967295, 01:23:26.441 "dif_insert_or_strip": false, 01:23:26.441 "zcopy": false, 01:23:26.441 "c2h_success": false, 01:23:26.441 "sock_priority": 0, 01:23:26.441 "abort_timeout_sec": 1, 01:23:26.441 "ack_timeout": 0, 01:23:26.441 "data_wr_pool_size": 0 01:23:26.441 } 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "method": "nvmf_create_subsystem", 01:23:26.441 "params": { 01:23:26.441 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:26.441 "allow_any_host": false, 01:23:26.441 "serial_number": "SPDK00000000000001", 01:23:26.441 "model_number": "SPDK bdev Controller", 01:23:26.441 "max_namespaces": 10, 01:23:26.441 "min_cntlid": 1, 01:23:26.441 "max_cntlid": 65519, 01:23:26.441 "ana_reporting": false 01:23:26.441 } 01:23:26.441 }, 01:23:26.441 { 01:23:26.441 "method": "nvmf_subsystem_add_host", 01:23:26.441 "params": { 01:23:26.441 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:26.442 "host": "nqn.2016-06.io.spdk:host1", 01:23:26.442 "psk": "key0" 01:23:26.442 } 01:23:26.442 }, 01:23:26.442 { 01:23:26.442 "method": "nvmf_subsystem_add_ns", 01:23:26.442 "params": { 01:23:26.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:26.442 "namespace": { 01:23:26.442 "nsid": 1, 01:23:26.442 "bdev_name": "malloc0", 01:23:26.442 "nguid": "8B4A6DBF0EA34FCABA25EB7192E1FDFE", 01:23:26.442 "uuid": "8b4a6dbf-0ea3-4fca-ba25-eb7192e1fdfe", 01:23:26.442 "no_auto_visible": false 01:23:26.442 } 01:23:26.442 } 01:23:26.442 }, 01:23:26.442 { 01:23:26.442 "method": "nvmf_subsystem_add_listener", 01:23:26.442 "params": { 01:23:26.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:26.442 "listen_address": { 01:23:26.442 "trtype": "TCP", 01:23:26.442 "adrfam": "IPv4", 01:23:26.442 "traddr": "10.0.0.3", 01:23:26.442 "trsvcid": "4420" 01:23:26.442 }, 01:23:26.442 "secure_channel": true 01:23:26.442 } 01:23:26.442 } 01:23:26.442 ] 01:23:26.442 } 01:23:26.442 ] 01:23:26.442 }' 01:23:26.442 05:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 01:23:26.700 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 01:23:26.701 "subsystems": [ 01:23:26.701 { 01:23:26.701 "subsystem": "keyring", 01:23:26.701 "config": [ 01:23:26.701 { 01:23:26.701 "method": "keyring_file_add_key", 01:23:26.701 "params": { 01:23:26.701 "name": "key0", 01:23:26.701 "path": "/tmp/tmp.rund1qqybI" 01:23:26.701 } 01:23:26.701 } 01:23:26.701 ] 01:23:26.701 }, 01:23:26.701 { 01:23:26.701 "subsystem": "iobuf", 01:23:26.701 "config": [ 01:23:26.701 { 01:23:26.701 "method": "iobuf_set_options", 01:23:26.701 "params": { 01:23:26.701 "small_pool_count": 8192, 01:23:26.701 "large_pool_count": 1024, 01:23:26.701 "small_bufsize": 8192, 01:23:26.701 "large_bufsize": 135168, 01:23:26.701 "enable_numa": false 01:23:26.701 } 01:23:26.701 } 01:23:26.701 ] 01:23:26.701 }, 01:23:26.701 { 01:23:26.701 "subsystem": "sock", 01:23:26.701 "config": [ 01:23:26.701 { 01:23:26.701 "method": "sock_set_default_impl", 01:23:26.701 "params": { 01:23:26.701 "impl_name": "uring" 01:23:26.701 } 01:23:26.701 }, 01:23:26.701 { 01:23:26.701 "method": "sock_impl_set_options", 01:23:26.701 "params": { 01:23:26.701 "impl_name": "ssl", 01:23:26.701 "recv_buf_size": 4096, 01:23:26.701 "send_buf_size": 4096, 01:23:26.701 "enable_recv_pipe": true, 01:23:26.701 "enable_quickack": false, 01:23:26.701 "enable_placement_id": 0, 01:23:26.701 "enable_zerocopy_send_server": true, 01:23:26.701 
"enable_zerocopy_send_client": false, 01:23:26.701 "zerocopy_threshold": 0, 01:23:26.701 "tls_version": 0, 01:23:26.701 "enable_ktls": false 01:23:26.701 } 01:23:26.701 }, 01:23:26.701 { 01:23:26.701 "method": "sock_impl_set_options", 01:23:26.701 "params": { 01:23:26.701 "impl_name": "posix", 01:23:26.701 "recv_buf_size": 2097152, 01:23:26.701 "send_buf_size": 2097152, 01:23:26.701 "enable_recv_pipe": true, 01:23:26.701 "enable_quickack": false, 01:23:26.701 "enable_placement_id": 0, 01:23:26.701 "enable_zerocopy_send_server": true, 01:23:26.701 "enable_zerocopy_send_client": false, 01:23:26.701 "zerocopy_threshold": 0, 01:23:26.701 "tls_version": 0, 01:23:26.701 "enable_ktls": false 01:23:26.701 } 01:23:26.701 }, 01:23:26.701 { 01:23:26.701 "method": "sock_impl_set_options", 01:23:26.701 "params": { 01:23:26.701 "impl_name": "uring", 01:23:26.701 "recv_buf_size": 2097152, 01:23:26.701 "send_buf_size": 2097152, 01:23:26.701 "enable_recv_pipe": true, 01:23:26.701 "enable_quickack": false, 01:23:26.701 "enable_placement_id": 0, 01:23:26.701 "enable_zerocopy_send_server": false, 01:23:26.701 "enable_zerocopy_send_client": false, 01:23:26.701 "zerocopy_threshold": 0, 01:23:26.701 "tls_version": 0, 01:23:26.701 "enable_ktls": false 01:23:26.701 } 01:23:26.701 } 01:23:26.701 ] 01:23:26.701 }, 01:23:26.701 { 01:23:26.701 "subsystem": "vmd", 01:23:26.701 "config": [] 01:23:26.701 }, 01:23:26.701 { 01:23:26.701 "subsystem": "accel", 01:23:26.701 "config": [ 01:23:26.701 { 01:23:26.701 "method": "accel_set_options", 01:23:26.701 "params": { 01:23:26.701 "small_cache_size": 128, 01:23:26.701 "large_cache_size": 16, 01:23:26.701 "task_count": 2048, 01:23:26.701 "sequence_count": 2048, 01:23:26.701 "buf_count": 2048 01:23:26.701 } 01:23:26.701 } 01:23:26.701 ] 01:23:26.701 }, 01:23:26.701 { 01:23:26.701 "subsystem": "bdev", 01:23:26.701 "config": [ 01:23:26.701 { 01:23:26.701 "method": "bdev_set_options", 01:23:26.701 "params": { 01:23:26.701 "bdev_io_pool_size": 65535, 01:23:26.701 "bdev_io_cache_size": 256, 01:23:26.701 "bdev_auto_examine": true, 01:23:26.701 "iobuf_small_cache_size": 128, 01:23:26.701 "iobuf_large_cache_size": 16 01:23:26.701 } 01:23:26.701 }, 01:23:26.701 { 01:23:26.701 "method": "bdev_raid_set_options", 01:23:26.701 "params": { 01:23:26.701 "process_window_size_kb": 1024, 01:23:26.701 "process_max_bandwidth_mb_sec": 0 01:23:26.701 } 01:23:26.701 }, 01:23:26.701 { 01:23:26.701 "method": "bdev_iscsi_set_options", 01:23:26.701 "params": { 01:23:26.701 "timeout_sec": 30 01:23:26.701 } 01:23:26.701 }, 01:23:26.701 { 01:23:26.701 "method": "bdev_nvme_set_options", 01:23:26.701 "params": { 01:23:26.701 "action_on_timeout": "none", 01:23:26.701 "timeout_us": 0, 01:23:26.701 "timeout_admin_us": 0, 01:23:26.701 "keep_alive_timeout_ms": 10000, 01:23:26.701 "arbitration_burst": 0, 01:23:26.701 "low_priority_weight": 0, 01:23:26.701 "medium_priority_weight": 0, 01:23:26.701 "high_priority_weight": 0, 01:23:26.701 "nvme_adminq_poll_period_us": 10000, 01:23:26.701 "nvme_ioq_poll_period_us": 0, 01:23:26.701 "io_queue_requests": 512, 01:23:26.701 "delay_cmd_submit": true, 01:23:26.701 "transport_retry_count": 4, 01:23:26.701 "bdev_retry_count": 3, 01:23:26.701 "transport_ack_timeout": 0, 01:23:26.701 "ctrlr_loss_timeout_sec": 0, 01:23:26.701 "reconnect_delay_sec": 0, 01:23:26.701 "fast_io_fail_timeout_sec": 0, 01:23:26.701 "disable_auto_failback": false, 01:23:26.701 "generate_uuids": false, 01:23:26.701 "transport_tos": 0, 01:23:26.701 "nvme_error_stat": false, 01:23:26.701 "rdma_srq_size": 0, 
01:23:26.701 "io_path_stat": false, 01:23:26.701 "allow_accel_sequence": false, 01:23:26.701 "rdma_max_cq_size": 0, 01:23:26.701 "rdma_cm_event_timeout_ms": 0, 01:23:26.701 "dhchap_digests": [ 01:23:26.701 "sha256", 01:23:26.701 "sha384", 01:23:26.701 "sha512" 01:23:26.701 ], 01:23:26.701 "dhchap_dhgroups": [ 01:23:26.701 "null", 01:23:26.701 "ffdhe2048", 01:23:26.701 "ffdhe3072", 01:23:26.701 "ffdhe4096", 01:23:26.701 "ffdhe6144", 01:23:26.701 "ffdhe8192" 01:23:26.701 ] 01:23:26.701 } 01:23:26.701 }, 01:23:26.701 { 01:23:26.701 "method": "bdev_nvme_attach_controller", 01:23:26.701 "params": { 01:23:26.701 "name": "TLSTEST", 01:23:26.701 "trtype": "TCP", 01:23:26.701 "adrfam": "IPv4", 01:23:26.701 "traddr": "10.0.0.3", 01:23:26.701 "trsvcid": "4420", 01:23:26.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:23:26.701 "prchk_reftag": false, 01:23:26.701 "prchk_guard": false, 01:23:26.701 "ctrlr_loss_timeout_sec": 0, 01:23:26.701 "reconnect_delay_sec": 0, 01:23:26.701 "fast_io_fail_timeout_sec": 0, 01:23:26.701 "psk": "key0", 01:23:26.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:23:26.701 "hdgst": false, 01:23:26.701 "ddgst": false, 01:23:26.701 "multipath": "multipath" 01:23:26.701 } 01:23:26.701 }, 01:23:26.701 { 01:23:26.701 "method": "bdev_nvme_set_hotplug", 01:23:26.701 "params": { 01:23:26.701 "period_us": 100000, 01:23:26.701 "enable": false 01:23:26.701 } 01:23:26.701 }, 01:23:26.701 { 01:23:26.701 "method": "bdev_wait_for_examine" 01:23:26.701 } 01:23:26.701 ] 01:23:26.701 }, 01:23:26.701 { 01:23:26.701 "subsystem": "nbd", 01:23:26.701 "config": [] 01:23:26.701 } 01:23:26.701 ] 01:23:26.701 }' 01:23:26.701 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71870 01:23:26.701 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71870 ']' 01:23:26.701 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71870 01:23:26.701 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:26.701 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:26.701 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71870 01:23:26.701 killing process with pid 71870 01:23:26.701 Received shutdown signal, test time was about 10.000000 seconds 01:23:26.701 01:23:26.701 Latency(us) 01:23:26.701 [2024-12-09T05:18:09.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:26.701 [2024-12-09T05:18:09.158Z] =================================================================================================================== 01:23:26.702 [2024-12-09T05:18:09.158Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:23:26.702 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:23:26.702 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:23:26.702 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71870' 01:23:26.702 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71870 01:23:26.702 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71870 01:23:26.959 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71819 01:23:26.959 05:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71819 ']' 01:23:26.959 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71819 01:23:26.959 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:26.959 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:26.959 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71819 01:23:26.959 killing process with pid 71819 01:23:26.959 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:23:26.959 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:23:26.959 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71819' 01:23:26.959 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71819 01:23:26.959 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71819 01:23:27.526 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 01:23:27.526 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:23:27.526 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:23:27.526 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:27.526 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 01:23:27.526 "subsystems": [ 01:23:27.526 { 01:23:27.526 "subsystem": "keyring", 01:23:27.526 "config": [ 01:23:27.526 { 01:23:27.526 "method": "keyring_file_add_key", 01:23:27.526 "params": { 01:23:27.526 "name": "key0", 01:23:27.526 "path": "/tmp/tmp.rund1qqybI" 01:23:27.526 } 01:23:27.526 } 01:23:27.526 ] 01:23:27.526 }, 01:23:27.526 { 01:23:27.526 "subsystem": "iobuf", 01:23:27.526 "config": [ 01:23:27.526 { 01:23:27.526 "method": "iobuf_set_options", 01:23:27.526 "params": { 01:23:27.526 "small_pool_count": 8192, 01:23:27.526 "large_pool_count": 1024, 01:23:27.526 "small_bufsize": 8192, 01:23:27.526 "large_bufsize": 135168, 01:23:27.526 "enable_numa": false 01:23:27.526 } 01:23:27.526 } 01:23:27.526 ] 01:23:27.526 }, 01:23:27.526 { 01:23:27.526 "subsystem": "sock", 01:23:27.526 "config": [ 01:23:27.526 { 01:23:27.526 "method": "sock_set_default_impl", 01:23:27.526 "params": { 01:23:27.526 "impl_name": "uring" 01:23:27.526 } 01:23:27.526 }, 01:23:27.526 { 01:23:27.526 "method": "sock_impl_set_options", 01:23:27.526 "params": { 01:23:27.526 "impl_name": "ssl", 01:23:27.526 "recv_buf_size": 4096, 01:23:27.526 "send_buf_size": 4096, 01:23:27.526 "enable_recv_pipe": true, 01:23:27.526 "enable_quickack": false, 01:23:27.526 "enable_placement_id": 0, 01:23:27.526 "enable_zerocopy_send_server": true, 01:23:27.526 "enable_zerocopy_send_client": false, 01:23:27.526 "zerocopy_threshold": 0, 01:23:27.526 "tls_version": 0, 01:23:27.526 "enable_ktls": false 01:23:27.526 } 01:23:27.526 }, 01:23:27.526 { 01:23:27.526 "method": "sock_impl_set_options", 01:23:27.526 "params": { 01:23:27.526 "impl_name": "posix", 01:23:27.526 "recv_buf_size": 2097152, 01:23:27.526 "send_buf_size": 2097152, 01:23:27.526 "enable_recv_pipe": true, 01:23:27.526 "enable_quickack": false, 01:23:27.526 "enable_placement_id": 0, 01:23:27.526 
"enable_zerocopy_send_server": true, 01:23:27.526 "enable_zerocopy_send_client": false, 01:23:27.526 "zerocopy_threshold": 0, 01:23:27.526 "tls_version": 0, 01:23:27.526 "enable_ktls": false 01:23:27.526 } 01:23:27.526 }, 01:23:27.526 { 01:23:27.526 "method": "sock_impl_set_options", 01:23:27.526 "params": { 01:23:27.526 "impl_name": "uring", 01:23:27.526 "recv_buf_size": 2097152, 01:23:27.526 "send_buf_size": 2097152, 01:23:27.526 "enable_recv_pipe": true, 01:23:27.526 "enable_quickack": false, 01:23:27.526 "enable_placement_id": 0, 01:23:27.526 "enable_zerocopy_send_server": false, 01:23:27.526 "enable_zerocopy_send_client": false, 01:23:27.526 "zerocopy_threshold": 0, 01:23:27.526 "tls_version": 0, 01:23:27.526 "enable_ktls": false 01:23:27.526 } 01:23:27.526 } 01:23:27.526 ] 01:23:27.526 }, 01:23:27.526 { 01:23:27.526 "subsystem": "vmd", 01:23:27.526 "config": [] 01:23:27.526 }, 01:23:27.526 { 01:23:27.526 "subsystem": "accel", 01:23:27.526 "config": [ 01:23:27.526 { 01:23:27.526 "method": "accel_set_options", 01:23:27.526 "params": { 01:23:27.526 "small_cache_size": 128, 01:23:27.526 "large_cache_size": 16, 01:23:27.526 "task_count": 2048, 01:23:27.526 "sequence_count": 2048, 01:23:27.526 "buf_count": 2048 01:23:27.526 } 01:23:27.526 } 01:23:27.526 ] 01:23:27.526 }, 01:23:27.526 { 01:23:27.526 "subsystem": "bdev", 01:23:27.526 "config": [ 01:23:27.526 { 01:23:27.526 "method": "bdev_set_options", 01:23:27.526 "params": { 01:23:27.526 "bdev_io_pool_size": 65535, 01:23:27.526 "bdev_io_cache_size": 256, 01:23:27.526 "bdev_auto_examine": true, 01:23:27.526 "iobuf_small_cache_size": 128, 01:23:27.526 "iobuf_large_cache_size": 16 01:23:27.526 } 01:23:27.526 }, 01:23:27.526 { 01:23:27.526 "method": "bdev_raid_set_options", 01:23:27.526 "params": { 01:23:27.526 "process_window_size_kb": 1024, 01:23:27.526 "process_max_bandwidth_mb_sec": 0 01:23:27.526 } 01:23:27.526 }, 01:23:27.526 { 01:23:27.526 "method": "bdev_iscsi_set_options", 01:23:27.526 "params": { 01:23:27.526 "timeout_sec": 30 01:23:27.526 } 01:23:27.526 }, 01:23:27.526 { 01:23:27.526 "method": "bdev_nvme_set_options", 01:23:27.526 "params": { 01:23:27.526 "action_on_timeout": "none", 01:23:27.526 "timeout_us": 0, 01:23:27.526 "timeout_admin_us": 0, 01:23:27.526 "keep_alive_timeout_ms": 10000, 01:23:27.526 "arbitration_burst": 0, 01:23:27.526 "low_priority_weight": 0, 01:23:27.526 "medium_priority_weight": 0, 01:23:27.526 "high_priority_weight": 0, 01:23:27.526 "nvme_adminq_poll_period_us": 10000, 01:23:27.526 "nvme_ioq_poll_period_us": 0, 01:23:27.526 "io_queue_requests": 0, 01:23:27.526 "delay_cmd_submit": true, 01:23:27.526 "transport_retry_count": 4, 01:23:27.526 "bdev_retry_count": 3, 01:23:27.526 "transport_ack_timeout": 0, 01:23:27.526 "ctrlr_loss_timeout_sec": 0, 01:23:27.526 "reconnect_delay_sec": 0, 01:23:27.526 "fast_io_fail_timeout_sec": 0, 01:23:27.526 "disable_auto_failback": false, 01:23:27.526 "generate_uuids": false, 01:23:27.526 "transport_tos": 0, 01:23:27.526 "nvme_error_stat": false, 01:23:27.526 "rdma_srq_size": 0, 01:23:27.526 "io_path_stat": false, 01:23:27.526 "allow_accel_sequence": false, 01:23:27.526 "rdma_max_cq_size": 0, 01:23:27.526 "rdma_cm_event_timeout_ms": 0, 01:23:27.526 "dhchap_digests": [ 01:23:27.526 "sha256", 01:23:27.526 "sha384", 01:23:27.526 "sha512" 01:23:27.526 ], 01:23:27.526 "dhchap_dhgroups": [ 01:23:27.526 "null", 01:23:27.526 "ffdhe2048", 01:23:27.526 "ffdhe3072", 01:23:27.526 "ffdhe4096", 01:23:27.526 "ffdhe6144", 01:23:27.526 "ffdhe8192" 01:23:27.526 ] 01:23:27.526 } 01:23:27.526 }, 
01:23:27.526 { 01:23:27.527 "method": "bdev_nvme_set_hotplug", 01:23:27.527 "params": { 01:23:27.527 "period_us": 100000, 01:23:27.527 "enable": false 01:23:27.527 } 01:23:27.527 }, 01:23:27.527 { 01:23:27.527 "method": "bdev_malloc_create", 01:23:27.527 "params": { 01:23:27.527 "name": "malloc0", 01:23:27.527 "num_blocks": 8192, 01:23:27.527 "block_size": 4096, 01:23:27.527 "physical_block_size": 4096, 01:23:27.527 "uuid": "8b4a6dbf-0ea3-4fca-ba25-eb7192e1fdfe", 01:23:27.527 "optimal_io_boundary": 0, 01:23:27.527 "md_size": 0, 01:23:27.527 "dif_type": 0, 01:23:27.527 "dif_is_head_of_md": false, 01:23:27.527 "dif_pi_format": 0 01:23:27.527 } 01:23:27.527 }, 01:23:27.527 { 01:23:27.527 "method": "bdev_wait_for_examine" 01:23:27.527 } 01:23:27.527 ] 01:23:27.527 }, 01:23:27.527 { 01:23:27.527 "subsystem": "nbd", 01:23:27.527 "config": [] 01:23:27.527 }, 01:23:27.527 { 01:23:27.527 "subsystem": "scheduler", 01:23:27.527 "config": [ 01:23:27.527 { 01:23:27.527 "method": "framework_set_scheduler", 01:23:27.527 "params": { 01:23:27.527 "name": "static" 01:23:27.527 } 01:23:27.527 } 01:23:27.527 ] 01:23:27.527 }, 01:23:27.527 { 01:23:27.527 "subsystem": "nvmf", 01:23:27.527 "config": [ 01:23:27.527 { 01:23:27.527 "method": "nvmf_set_config", 01:23:27.527 "params": { 01:23:27.527 "discovery_filter": "match_any", 01:23:27.527 "admin_cmd_passthru": { 01:23:27.527 "identify_ctrlr": false 01:23:27.527 }, 01:23:27.527 "dhchap_digests": [ 01:23:27.527 "sha256", 01:23:27.527 "sha384", 01:23:27.527 "sha512" 01:23:27.527 ], 01:23:27.527 "dhchap_dhgroups": [ 01:23:27.527 "null", 01:23:27.527 "ffdhe2048", 01:23:27.527 "ffdhe3072", 01:23:27.527 "ffdhe4096", 01:23:27.527 "ffdhe6144", 01:23:27.527 "ffdhe8192" 01:23:27.527 ] 01:23:27.527 } 01:23:27.527 }, 01:23:27.527 { 01:23:27.527 "method": "nvmf_set_max_subsystems", 01:23:27.527 "params": { 01:23:27.527 "max_subsystems": 1024 01:23:27.527 } 01:23:27.527 }, 01:23:27.527 { 01:23:27.527 "method": "nvmf_set_crdt", 01:23:27.527 "params": { 01:23:27.527 "crdt1": 0, 01:23:27.527 "crdt2": 0, 01:23:27.527 "crdt3": 0 01:23:27.527 } 01:23:27.527 }, 01:23:27.527 { 01:23:27.527 "method": "nvmf_create_transport", 01:23:27.527 "params": { 01:23:27.527 "trtype": "TCP", 01:23:27.527 "max_queue_depth": 128, 01:23:27.527 "max_io_qpairs_per_ctrlr": 127, 01:23:27.527 "in_capsule_data_size": 4096, 01:23:27.527 "max_io_size": 131072, 01:23:27.527 "io_unit_size": 131072, 01:23:27.527 "max_aq_depth": 128, 01:23:27.527 "num_shared_buffers": 511, 01:23:27.527 "buf_cache_size": 4294967295, 01:23:27.527 "dif_insert_or_strip": false, 01:23:27.527 "zcopy": false, 01:23:27.527 "c2h_success": false, 01:23:27.527 "sock_priority": 0, 01:23:27.527 "abort_timeout_sec": 1, 01:23:27.527 "ack_timeout": 0, 01:23:27.527 "data_wr_pool_size": 0 01:23:27.527 } 01:23:27.527 }, 01:23:27.527 { 01:23:27.527 "method": "nvmf_create_subsystem", 01:23:27.527 "params": { 01:23:27.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:27.527 "allow_any_host": false, 01:23:27.527 "serial_number": "SPDK00000000000001", 01:23:27.527 "model_number": "SPDK bdev Controller", 01:23:27.527 "max_namespaces": 10, 01:23:27.527 "min_cntlid": 1, 01:23:27.527 "max_cntlid": 65519, 01:23:27.527 "ana_reporting": false 01:23:27.527 } 01:23:27.527 }, 01:23:27.527 { 01:23:27.527 "method": "nvmf_subsystem_add_host", 01:23:27.527 "params": { 01:23:27.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:27.527 "host": "nqn.2016-06.io.spdk:host1", 01:23:27.527 "psk": "key0" 01:23:27.527 } 01:23:27.527 }, 01:23:27.527 { 01:23:27.527 "method": 
"nvmf_subsystem_add_ns", 01:23:27.527 "params": { 01:23:27.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:27.527 "namespace": { 01:23:27.527 "nsid": 1, 01:23:27.527 "bdev_name": "malloc0", 01:23:27.527 "nguid": "8B4A6DBF0EA34FCABA25EB7192E1FDFE", 01:23:27.527 "uuid": "8b4a6dbf-0ea3-4fca-ba25-eb7192e1fdfe", 01:23:27.527 "no_auto_visible": false 01:23:27.527 } 01:23:27.527 } 01:23:27.527 }, 01:23:27.527 { 01:23:27.527 "method": "nvmf_subsystem_add_listener", 01:23:27.527 "params": { 01:23:27.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:27.527 "listen_address": { 01:23:27.527 "trtype": "TCP", 01:23:27.527 "adrfam": "IPv4", 01:23:27.527 "traddr": "10.0.0.3", 01:23:27.527 "trsvcid": "4420" 01:23:27.527 }, 01:23:27.527 "secure_channel": true 01:23:27.527 } 01:23:27.527 } 01:23:27.527 ] 01:23:27.527 } 01:23:27.527 ] 01:23:27.527 }' 01:23:27.527 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 01:23:27.527 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71918 01:23:27.527 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71918 01:23:27.527 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71918 ']' 01:23:27.527 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:27.527 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:27.527 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:23:27.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:23:27.527 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:27.527 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:27.527 [2024-12-09 05:18:09.803750] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:23:27.527 [2024-12-09 05:18:09.803888] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:23:27.527 [2024-12-09 05:18:09.955709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:27.787 [2024-12-09 05:18:10.033116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:23:27.787 [2024-12-09 05:18:10.033307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:23:27.787 [2024-12-09 05:18:10.033370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:23:27.787 [2024-12-09 05:18:10.033404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:23:27.787 [2024-12-09 05:18:10.033448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:23:27.787 [2024-12-09 05:18:10.033899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:23:27.787 [2024-12-09 05:18:10.227632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:28.045 [2024-12-09 05:18:10.329985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:23:28.045 [2024-12-09 05:18:10.361854] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:23:28.045 [2024-12-09 05:18:10.362073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:23:28.304 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:28.304 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:28.304 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:23:28.304 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:23:28.304 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:28.304 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:23:28.304 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=71950 01:23:28.304 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 71950 /var/tmp/bdevperf.sock 01:23:28.304 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71950 ']' 01:23:28.304 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:23:28.304 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 01:23:28.304 "subsystems": [ 01:23:28.304 { 01:23:28.304 "subsystem": "keyring", 01:23:28.304 "config": [ 01:23:28.304 { 01:23:28.304 "method": "keyring_file_add_key", 01:23:28.304 "params": { 01:23:28.304 "name": "key0", 01:23:28.304 "path": "/tmp/tmp.rund1qqybI" 01:23:28.304 } 01:23:28.304 } 01:23:28.304 ] 01:23:28.304 }, 01:23:28.304 { 01:23:28.304 "subsystem": "iobuf", 01:23:28.304 "config": [ 01:23:28.304 { 01:23:28.304 "method": "iobuf_set_options", 01:23:28.304 "params": { 01:23:28.304 "small_pool_count": 8192, 01:23:28.304 "large_pool_count": 1024, 01:23:28.304 "small_bufsize": 8192, 01:23:28.304 "large_bufsize": 135168, 01:23:28.304 "enable_numa": false 01:23:28.304 } 01:23:28.304 } 01:23:28.304 ] 01:23:28.304 }, 01:23:28.304 { 01:23:28.304 "subsystem": "sock", 01:23:28.304 "config": [ 01:23:28.304 { 01:23:28.304 "method": "sock_set_default_impl", 01:23:28.304 "params": { 01:23:28.304 "impl_name": "uring" 01:23:28.304 } 01:23:28.304 }, 01:23:28.304 { 01:23:28.304 "method": "sock_impl_set_options", 01:23:28.304 "params": { 01:23:28.304 "impl_name": "ssl", 01:23:28.304 "recv_buf_size": 4096, 01:23:28.304 "send_buf_size": 4096, 01:23:28.304 "enable_recv_pipe": true, 01:23:28.304 "enable_quickack": false, 01:23:28.304 "enable_placement_id": 0, 01:23:28.304 "enable_zerocopy_send_server": true, 01:23:28.304 "enable_zerocopy_send_client": false, 01:23:28.304 "zerocopy_threshold": 0, 01:23:28.304 "tls_version": 0, 01:23:28.304 "enable_ktls": false 01:23:28.304 } 01:23:28.304 }, 01:23:28.304 { 01:23:28.304 "method": "sock_impl_set_options", 01:23:28.304 "params": { 01:23:28.304 "impl_name": "posix", 01:23:28.304 "recv_buf_size": 2097152, 01:23:28.304 
"send_buf_size": 2097152, 01:23:28.304 "enable_recv_pipe": true, 01:23:28.304 "enable_quickack": false, 01:23:28.304 "enable_placement_id": 0, 01:23:28.304 "enable_zerocopy_send_server": true, 01:23:28.304 "enable_zerocopy_send_client": false, 01:23:28.304 "zerocopy_threshold": 0, 01:23:28.304 "tls_version": 0, 01:23:28.304 "enable_ktls": false 01:23:28.304 } 01:23:28.304 }, 01:23:28.304 { 01:23:28.304 "method": "sock_impl_set_options", 01:23:28.304 "params": { 01:23:28.304 "impl_name": "uring", 01:23:28.304 "recv_buf_size": 2097152, 01:23:28.304 "send_buf_size": 2097152, 01:23:28.304 "enable_recv_pipe": true, 01:23:28.304 "enable_quickack": false, 01:23:28.304 "enable_placement_id": 0, 01:23:28.305 "enable_zerocopy_send_server": false, 01:23:28.305 "enable_zerocopy_send_client": false, 01:23:28.305 "zerocopy_threshold": 0, 01:23:28.305 "tls_version": 0, 01:23:28.305 "enable_ktls": false 01:23:28.305 } 01:23:28.305 } 01:23:28.305 ] 01:23:28.305 }, 01:23:28.305 { 01:23:28.305 "subsystem": "vmd", 01:23:28.305 "config": [] 01:23:28.305 }, 01:23:28.305 { 01:23:28.305 "subsystem": "accel", 01:23:28.305 "config": [ 01:23:28.305 { 01:23:28.305 "method": "accel_set_options", 01:23:28.305 "params": { 01:23:28.305 "small_cache_size": 128, 01:23:28.305 "large_cache_size": 16, 01:23:28.305 "task_count": 2048, 01:23:28.305 "sequence_count": 2048, 01:23:28.305 "buf_count": 2048 01:23:28.305 } 01:23:28.305 } 01:23:28.305 ] 01:23:28.305 }, 01:23:28.305 { 01:23:28.305 "subsystem": "bdev", 01:23:28.305 "config": [ 01:23:28.305 { 01:23:28.305 "method": "bdev_set_options", 01:23:28.305 "params": { 01:23:28.305 "bdev_io_pool_size": 65535, 01:23:28.305 "bdev_io_cache_size": 256, 01:23:28.305 "bdev_auto_examine": true, 01:23:28.305 "iobuf_small_cache_size": 128, 01:23:28.305 "iobuf_large_cache_size": 16 01:23:28.305 } 01:23:28.305 }, 01:23:28.305 { 01:23:28.305 "method": "bdev_raid_set_options", 01:23:28.305 "params": { 01:23:28.305 "process_window_size_kb": 1024, 01:23:28.305 "process_max_bandwidth_mb_sec": 0 01:23:28.305 } 01:23:28.305 }, 01:23:28.305 { 01:23:28.305 "method": "bdev_iscsi_set_options", 01:23:28.305 "params": { 01:23:28.305 "timeout_sec": 30 01:23:28.305 } 01:23:28.305 }, 01:23:28.305 { 01:23:28.305 "method": "bdev_nvme_set_options", 01:23:28.305 "params": { 01:23:28.305 "action_on_timeout": "none", 01:23:28.305 "timeout_us": 0, 01:23:28.305 "timeout_admin_us": 0, 01:23:28.305 "keep_alive_timeout_ms": 10000, 01:23:28.305 "arbitration_burst": 0, 01:23:28.305 "low_priority_weight": 0, 01:23:28.305 "medium_priority_weight": 0, 01:23:28.305 "high_priority_weight": 0, 01:23:28.305 "nvme_adminq_poll_period_us": 10000, 01:23:28.305 "nvme_ioq_poll_period_us": 0, 01:23:28.305 "io_queue_requests": 512, 01:23:28.305 "delay_cmd_submit": true, 01:23:28.305 "transport_retry_count": 4, 01:23:28.305 "bdev_retry_count": 3, 01:23:28.305 "transport_ack_timeout": 0, 01:23:28.305 "ctrlr_loss_timeout_sec": 0, 01:23:28.305 "reconnect_delay_sec": 0, 01:23:28.305 "fast_io_fail_timeout_sec": 0, 01:23:28.305 "disable_auto_failback": false, 01:23:28.305 "generate_uuids": false, 01:23:28.305 "transport_tos": 0, 01:23:28.305 "nvme_error_stat": false, 01:23:28.305 "rdma_srq_size": 0, 01:23:28.305 "io_path_stat": false, 01:23:28.305 "allow_accel_sequence": false, 01:23:28.305 "rdma_max_cq_size": 0, 01:23:28.305 "rdma_cm_event_timeout_ms": 0, 01:23:28.305 "dhchap_digests": [ 01:23:28.305 "sha256", 01:23:28.305 "sha384", 01:23:28.305 "sha512" 01:23:28.305 ], 01:23:28.305 "dhchap_dhgroups": [ 01:23:28.305 "null", 01:23:28.305 
"ffdhe2048", 01:23:28.305 "ffdhe3072", 01:23:28.305 "ffdhe4096", 01:23:28.305 "ffdhe6144", 01:23:28.305 "ffdhe8192" 01:23:28.305 ] 01:23:28.305 } 01:23:28.305 }, 01:23:28.305 { 01:23:28.305 "method": "bdev_nvme_attach_controller", 01:23:28.305 "params": { 01:23:28.305 "name": "TLSTEST", 01:23:28.305 "trtype": "TCP", 01:23:28.305 "adrfam": "IPv4", 01:23:28.305 "traddr": "10.0.0.3", 01:23:28.305 "trsvcid": "4420", 01:23:28.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:23:28.305 "prchk_reftag": false, 01:23:28.305 "prchk_guard": false, 01:23:28.305 "ctrlr_loss_timeout_sec": 0, 01:23:28.305 "reconnect_delay_sec": 0, 01:23:28.305 "fast_io_fail_timeout_sec": 0, 01:23:28.305 "psk": "key0", 01:23:28.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:23:28.305 "hdgst": false, 01:23:28.305 "ddgst": false, 01:23:28.305 "multipath": "multipath" 01:23:28.305 } 01:23:28.305 }, 01:23:28.305 { 01:23:28.305 "method": "bdev_nvme_set_hotplug", 01:23:28.305 "params": { 01:23:28.305 "period_us": 100000, 01:23:28.305 "enable": false 01:23:28.305 } 01:23:28.305 }, 01:23:28.305 { 01:23:28.305 "method": "bdev_wait_for_examine" 01:23:28.305 } 01:23:28.305 ] 01:23:28.305 }, 01:23:28.305 { 01:23:28.305 "subsystem": "nbd", 01:23:28.305 "config": [] 01:23:28.305 } 01:23:28.305 ] 01:23:28.305 }' 01:23:28.305 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 01:23:28.305 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:28.305 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:23:28.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:23:28.305 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:28.305 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:28.305 [2024-12-09 05:18:10.751668] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:23:28.305 [2024-12-09 05:18:10.751787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71950 ] 01:23:28.563 [2024-12-09 05:18:10.906974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:28.564 [2024-12-09 05:18:10.954352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:23:28.821 [2024-12-09 05:18:11.075253] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:28.821 [2024-12-09 05:18:11.118099] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:23:29.386 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:29.386 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:29.386 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:23:29.386 Running I/O for 10 seconds... 
01:23:31.698 5035.00 IOPS, 19.67 MiB/s [2024-12-09T05:18:15.089Z] 5102.50 IOPS, 19.93 MiB/s [2024-12-09T05:18:16.027Z] 5120.33 IOPS, 20.00 MiB/s [2024-12-09T05:18:17.013Z] 5152.25 IOPS, 20.13 MiB/s [2024-12-09T05:18:17.948Z] 5171.20 IOPS, 20.20 MiB/s [2024-12-09T05:18:18.880Z] 5184.00 IOPS, 20.25 MiB/s [2024-12-09T05:18:19.814Z] 5186.29 IOPS, 20.26 MiB/s [2024-12-09T05:18:20.749Z] 5191.00 IOPS, 20.28 MiB/s [2024-12-09T05:18:22.125Z] 5194.89 IOPS, 20.29 MiB/s [2024-12-09T05:18:22.125Z] 5204.90 IOPS, 20.33 MiB/s 01:23:39.669 Latency(us) 01:23:39.669 [2024-12-09T05:18:22.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:39.669 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:23:39.669 Verification LBA range: start 0x0 length 0x2000 01:23:39.669 TLSTESTn1 : 10.02 5209.63 20.35 0.00 0.00 24530.40 5437.48 18888.10 01:23:39.669 [2024-12-09T05:18:22.125Z] =================================================================================================================== 01:23:39.669 [2024-12-09T05:18:22.125Z] Total : 5209.63 20.35 0.00 0.00 24530.40 5437.48 18888.10 01:23:39.669 { 01:23:39.669 "results": [ 01:23:39.669 { 01:23:39.669 "job": "TLSTESTn1", 01:23:39.669 "core_mask": "0x4", 01:23:39.669 "workload": "verify", 01:23:39.669 "status": "finished", 01:23:39.669 "verify_range": { 01:23:39.669 "start": 0, 01:23:39.669 "length": 8192 01:23:39.669 }, 01:23:39.669 "queue_depth": 128, 01:23:39.669 "io_size": 4096, 01:23:39.669 "runtime": 10.015107, 01:23:39.669 "iops": 5209.629812242645, 01:23:39.669 "mibps": 20.350116454072833, 01:23:39.669 "io_failed": 0, 01:23:39.669 "io_timeout": 0, 01:23:39.669 "avg_latency_us": 24530.40030123681, 01:23:39.669 "min_latency_us": 5437.484716157205, 01:23:39.669 "max_latency_us": 18888.10480349345 01:23:39.669 } 01:23:39.669 ], 01:23:39.669 "core_count": 1 01:23:39.669 } 01:23:39.669 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:23:39.669 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 71950 01:23:39.669 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71950 ']' 01:23:39.669 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71950 01:23:39.669 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:39.669 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:39.669 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71950 01:23:39.669 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:23:39.670 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:23:39.670 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71950' 01:23:39.670 killing process with pid 71950 01:23:39.670 Received shutdown signal, test time was about 10.000000 seconds 01:23:39.670 01:23:39.670 Latency(us) 01:23:39.670 [2024-12-09T05:18:22.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:39.670 [2024-12-09T05:18:22.126Z] =================================================================================================================== 01:23:39.670 [2024-12-09T05:18:22.126Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 01:23:39.670 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71950 01:23:39.670 05:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71950 01:23:39.670 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 71918 01:23:39.670 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71918 ']' 01:23:39.670 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71918 01:23:39.670 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:39.670 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:39.670 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71918 01:23:39.670 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:23:39.670 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:23:39.670 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71918' 01:23:39.670 killing process with pid 71918 01:23:39.670 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71918 01:23:39.670 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71918 01:23:40.238 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 01:23:40.238 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:23:40.238 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:23:40.238 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:40.238 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72089 01:23:40.238 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:23:40.238 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72089 01:23:40.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:23:40.238 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72089 ']' 01:23:40.238 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:40.238 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:40.238 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:23:40.238 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:40.238 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:40.238 [2024-12-09 05:18:22.474051] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:23:40.238 [2024-12-09 05:18:22.474122] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:23:40.238 [2024-12-09 05:18:22.623971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:40.238 [2024-12-09 05:18:22.675485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:23:40.238 [2024-12-09 05:18:22.675538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:23:40.238 [2024-12-09 05:18:22.675543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:23:40.238 [2024-12-09 05:18:22.675549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:23:40.238 [2024-12-09 05:18:22.675552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:23:40.238 [2024-12-09 05:18:22.675822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:23:40.497 [2024-12-09 05:18:22.716447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:41.064 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:41.064 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:41.064 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:23:41.064 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:23:41.064 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:41.064 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:23:41.064 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.rund1qqybI 01:23:41.064 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rund1qqybI 01:23:41.064 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:23:41.323 [2024-12-09 05:18:23.569426] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:23:41.323 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:23:41.582 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:23:41.582 [2024-12-09 05:18:23.956746] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:23:41.582 [2024-12-09 05:18:23.956984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:23:41.582 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:23:41.841 malloc0 01:23:41.841 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
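Together with the key and host registration traced immediately below, the RPCs above are the whole server-side TLS setup for this test: a TCP transport, a subsystem backed by a 32 MiB malloc bdev, a listener flagged -k (secure channel, i.e. TLS), and a PSK tied to the one host NQN that is allowed to connect. A condensed sketch of that sequence, with NQNs and the PSK path copied from this run and rpc.py assumed to talk to the default /var/tmp/spdk.sock:

# Condensed target-side TLS setup, in the order setup_nvmf_tgt issues it here
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: secure channel (TLS)
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                                                  # 32 MiB backing bdev
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rund1qqybI                                          # register the PSK file as key0
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 # only host1 may connect, with key0

The initiator side mirrors this with its own keyring_file_add_key plus bdev_nvme_attach_controller --psk key0, as seen in the attach traces elsewhere in this run.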
01:23:42.100 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rund1qqybI 01:23:42.358 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:23:42.358 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72139 01:23:42.358 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 01:23:42.358 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:23:42.358 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72139 /var/tmp/bdevperf.sock 01:23:42.358 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72139 ']' 01:23:42.358 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:23:42.358 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:42.358 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:23:42.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:23:42.358 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:42.358 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:42.616 [2024-12-09 05:18:24.818935] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:23:42.616 [2024-12-09 05:18:24.819006] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72139 ] 01:23:42.616 [2024-12-09 05:18:24.951209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:42.616 [2024-12-09 05:18:25.013923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:23:42.616 [2024-12-09 05:18:25.054753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:43.565 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:43.565 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:43.565 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rund1qqybI 01:23:43.565 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:23:43.823 [2024-12-09 05:18:26.180035] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:23:43.823 nvme0n1 01:23:43.823 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:23:44.082 Running I/O for 1 seconds... 01:23:45.019 6408.00 IOPS, 25.03 MiB/s 01:23:45.019 Latency(us) 01:23:45.019 [2024-12-09T05:18:27.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:45.019 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:23:45.019 Verification LBA range: start 0x0 length 0x2000 01:23:45.019 nvme0n1 : 1.01 6464.89 25.25 0.00 0.00 19671.75 4063.80 15339.43 01:23:45.019 [2024-12-09T05:18:27.475Z] =================================================================================================================== 01:23:45.019 [2024-12-09T05:18:27.475Z] Total : 6464.89 25.25 0.00 0.00 19671.75 4063.80 15339.43 01:23:45.019 { 01:23:45.019 "results": [ 01:23:45.019 { 01:23:45.019 "job": "nvme0n1", 01:23:45.019 "core_mask": "0x2", 01:23:45.019 "workload": "verify", 01:23:45.019 "status": "finished", 01:23:45.019 "verify_range": { 01:23:45.019 "start": 0, 01:23:45.019 "length": 8192 01:23:45.019 }, 01:23:45.019 "queue_depth": 128, 01:23:45.019 "io_size": 4096, 01:23:45.019 "runtime": 1.010999, 01:23:45.019 "iops": 6464.8926457889675, 01:23:45.019 "mibps": 25.253486897613154, 01:23:45.019 "io_failed": 0, 01:23:45.019 "io_timeout": 0, 01:23:45.019 "avg_latency_us": 19671.74642771242, 01:23:45.019 "min_latency_us": 4063.8043668122273, 01:23:45.019 "max_latency_us": 15339.43056768559 01:23:45.019 } 01:23:45.019 ], 01:23:45.019 "core_count": 1 01:23:45.019 } 01:23:45.019 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72139 01:23:45.019 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72139 ']' 01:23:45.019 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72139 01:23:45.019 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 01:23:45.019 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:45.019 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72139 01:23:45.019 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:23:45.019 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:23:45.019 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72139' 01:23:45.019 killing process with pid 72139 01:23:45.019 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72139 01:23:45.019 Received shutdown signal, test time was about 1.000000 seconds 01:23:45.019 01:23:45.019 Latency(us) 01:23:45.019 [2024-12-09T05:18:27.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:45.019 [2024-12-09T05:18:27.475Z] =================================================================================================================== 01:23:45.019 [2024-12-09T05:18:27.475Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:23:45.019 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72139 01:23:45.277 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72089 01:23:45.277 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72089 ']' 01:23:45.277 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72089 01:23:45.277 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:45.277 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:45.277 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72089 01:23:45.277 killing process with pid 72089 01:23:45.277 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:23:45.277 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:23:45.277 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72089' 01:23:45.277 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72089 01:23:45.277 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72089 01:23:45.538 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 01:23:45.538 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:23:45.538 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:23:45.538 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:45.538 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72190 01:23:45.538 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:23:45.538 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72190 01:23:45.538 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 72190 ']' 01:23:45.538 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:45.538 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:45.538 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:23:45.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:23:45.538 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:45.538 05:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:45.538 [2024-12-09 05:18:27.959539] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:23:45.538 [2024-12-09 05:18:27.959605] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:23:45.796 [2024-12-09 05:18:28.110228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:45.796 [2024-12-09 05:18:28.157991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:23:45.796 [2024-12-09 05:18:28.158135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:23:45.796 [2024-12-09 05:18:28.158172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:23:45.796 [2024-12-09 05:18:28.158199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:23:45.796 [2024-12-09 05:18:28.158215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
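For reference, the initiator-side TLS flow exercised in the bdevperf run above reduces to three calls against the bdevperf RPC socket. This is a minimal sketch, not part of the captured output, reusing only the socket path, key file, address, and NQNs that appear verbatim in this log:
# register the pre-shared key file under the name "key0"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rund1qqybI
# attach an NVMe/TCP controller, enabling TLS by passing that key as the PSK
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# run the configured verify workload against the resulting nvme0n1 bdev
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests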
01:23:45.796 [2024-12-09 05:18:28.158623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:23:45.796 [2024-12-09 05:18:28.199651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:46.731 [2024-12-09 05:18:28.897058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:23:46.731 malloc0 01:23:46.731 [2024-12-09 05:18:28.925464] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:23:46.731 [2024-12-09 05:18:28.925655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72222 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72222 /var/tmp/bdevperf.sock 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72222 ']' 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:23:46.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:46.731 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:46.731 [2024-12-09 05:18:29.008383] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:23:46.731 [2024-12-09 05:18:29.008490] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72222 ] 01:23:46.731 [2024-12-09 05:18:29.139766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:46.989 [2024-12-09 05:18:29.199916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:23:46.989 [2024-12-09 05:18:29.240780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:47.556 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:47.556 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:47.556 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rund1qqybI 01:23:47.814 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:23:48.073 [2024-12-09 05:18:30.280833] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:23:48.073 nvme0n1 01:23:48.073 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:23:48.073 Running I/O for 1 seconds... 01:23:49.449 6343.00 IOPS, 24.78 MiB/s 01:23:49.449 Latency(us) 01:23:49.449 [2024-12-09T05:18:31.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:49.449 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:23:49.449 Verification LBA range: start 0x0 length 0x2000 01:23:49.449 nvme0n1 : 1.01 6399.05 25.00 0.00 0.00 19869.82 4063.80 15453.90 01:23:49.449 [2024-12-09T05:18:31.905Z] =================================================================================================================== 01:23:49.449 [2024-12-09T05:18:31.905Z] Total : 6399.05 25.00 0.00 0.00 19869.82 4063.80 15453.90 01:23:49.449 { 01:23:49.449 "results": [ 01:23:49.449 { 01:23:49.449 "job": "nvme0n1", 01:23:49.449 "core_mask": "0x2", 01:23:49.449 "workload": "verify", 01:23:49.449 "status": "finished", 01:23:49.449 "verify_range": { 01:23:49.449 "start": 0, 01:23:49.449 "length": 8192 01:23:49.449 }, 01:23:49.450 "queue_depth": 128, 01:23:49.450 "io_size": 4096, 01:23:49.450 "runtime": 1.011244, 01:23:49.450 "iops": 6399.049092009446, 01:23:49.450 "mibps": 24.9962855156619, 01:23:49.450 "io_failed": 0, 01:23:49.450 "io_timeout": 0, 01:23:49.450 "avg_latency_us": 19869.818700969525, 01:23:49.450 "min_latency_us": 4063.8043668122273, 01:23:49.450 "max_latency_us": 15453.903930131004 01:23:49.450 } 01:23:49.450 ], 01:23:49.450 "core_count": 1 01:23:49.450 } 01:23:49.450 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 01:23:49.450 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 01:23:49.450 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:49.450 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:23:49.450 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 01:23:49.450 "subsystems": [ 01:23:49.450 { 01:23:49.450 "subsystem": "keyring", 01:23:49.450 "config": [ 01:23:49.450 { 01:23:49.450 "method": "keyring_file_add_key", 01:23:49.450 "params": { 01:23:49.450 "name": "key0", 01:23:49.450 "path": "/tmp/tmp.rund1qqybI" 01:23:49.450 } 01:23:49.450 } 01:23:49.450 ] 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "subsystem": "iobuf", 01:23:49.450 "config": [ 01:23:49.450 { 01:23:49.450 "method": "iobuf_set_options", 01:23:49.450 "params": { 01:23:49.450 "small_pool_count": 8192, 01:23:49.450 "large_pool_count": 1024, 01:23:49.450 "small_bufsize": 8192, 01:23:49.450 "large_bufsize": 135168, 01:23:49.450 "enable_numa": false 01:23:49.450 } 01:23:49.450 } 01:23:49.450 ] 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "subsystem": "sock", 01:23:49.450 "config": [ 01:23:49.450 { 01:23:49.450 "method": "sock_set_default_impl", 01:23:49.450 "params": { 01:23:49.450 "impl_name": "uring" 01:23:49.450 } 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "method": "sock_impl_set_options", 01:23:49.450 "params": { 01:23:49.450 "impl_name": "ssl", 01:23:49.450 "recv_buf_size": 4096, 01:23:49.450 "send_buf_size": 4096, 01:23:49.450 "enable_recv_pipe": true, 01:23:49.450 "enable_quickack": false, 01:23:49.450 "enable_placement_id": 0, 01:23:49.450 "enable_zerocopy_send_server": true, 01:23:49.450 "enable_zerocopy_send_client": false, 01:23:49.450 "zerocopy_threshold": 0, 01:23:49.450 "tls_version": 0, 01:23:49.450 "enable_ktls": false 01:23:49.450 } 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "method": "sock_impl_set_options", 01:23:49.450 "params": { 01:23:49.450 "impl_name": "posix", 01:23:49.450 "recv_buf_size": 2097152, 01:23:49.450 "send_buf_size": 2097152, 01:23:49.450 "enable_recv_pipe": true, 01:23:49.450 "enable_quickack": false, 01:23:49.450 "enable_placement_id": 0, 01:23:49.450 "enable_zerocopy_send_server": true, 01:23:49.450 "enable_zerocopy_send_client": false, 01:23:49.450 "zerocopy_threshold": 0, 01:23:49.450 "tls_version": 0, 01:23:49.450 "enable_ktls": false 01:23:49.450 } 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "method": "sock_impl_set_options", 01:23:49.450 "params": { 01:23:49.450 "impl_name": "uring", 01:23:49.450 "recv_buf_size": 2097152, 01:23:49.450 "send_buf_size": 2097152, 01:23:49.450 "enable_recv_pipe": true, 01:23:49.450 "enable_quickack": false, 01:23:49.450 "enable_placement_id": 0, 01:23:49.450 "enable_zerocopy_send_server": false, 01:23:49.450 "enable_zerocopy_send_client": false, 01:23:49.450 "zerocopy_threshold": 0, 01:23:49.450 "tls_version": 0, 01:23:49.450 "enable_ktls": false 01:23:49.450 } 01:23:49.450 } 01:23:49.450 ] 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "subsystem": "vmd", 01:23:49.450 "config": [] 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "subsystem": "accel", 01:23:49.450 "config": [ 01:23:49.450 { 01:23:49.450 "method": "accel_set_options", 01:23:49.450 "params": { 01:23:49.450 "small_cache_size": 128, 01:23:49.450 "large_cache_size": 16, 01:23:49.450 "task_count": 2048, 01:23:49.450 "sequence_count": 2048, 01:23:49.450 "buf_count": 2048 01:23:49.450 } 01:23:49.450 } 01:23:49.450 ] 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "subsystem": "bdev", 01:23:49.450 "config": [ 01:23:49.450 { 01:23:49.450 "method": "bdev_set_options", 01:23:49.450 "params": { 01:23:49.450 "bdev_io_pool_size": 65535, 01:23:49.450 "bdev_io_cache_size": 256, 01:23:49.450 "bdev_auto_examine": true, 
01:23:49.450 "iobuf_small_cache_size": 128, 01:23:49.450 "iobuf_large_cache_size": 16 01:23:49.450 } 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "method": "bdev_raid_set_options", 01:23:49.450 "params": { 01:23:49.450 "process_window_size_kb": 1024, 01:23:49.450 "process_max_bandwidth_mb_sec": 0 01:23:49.450 } 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "method": "bdev_iscsi_set_options", 01:23:49.450 "params": { 01:23:49.450 "timeout_sec": 30 01:23:49.450 } 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "method": "bdev_nvme_set_options", 01:23:49.450 "params": { 01:23:49.450 "action_on_timeout": "none", 01:23:49.450 "timeout_us": 0, 01:23:49.450 "timeout_admin_us": 0, 01:23:49.450 "keep_alive_timeout_ms": 10000, 01:23:49.450 "arbitration_burst": 0, 01:23:49.450 "low_priority_weight": 0, 01:23:49.450 "medium_priority_weight": 0, 01:23:49.450 "high_priority_weight": 0, 01:23:49.450 "nvme_adminq_poll_period_us": 10000, 01:23:49.450 "nvme_ioq_poll_period_us": 0, 01:23:49.450 "io_queue_requests": 0, 01:23:49.450 "delay_cmd_submit": true, 01:23:49.450 "transport_retry_count": 4, 01:23:49.450 "bdev_retry_count": 3, 01:23:49.450 "transport_ack_timeout": 0, 01:23:49.450 "ctrlr_loss_timeout_sec": 0, 01:23:49.450 "reconnect_delay_sec": 0, 01:23:49.450 "fast_io_fail_timeout_sec": 0, 01:23:49.450 "disable_auto_failback": false, 01:23:49.450 "generate_uuids": false, 01:23:49.450 "transport_tos": 0, 01:23:49.450 "nvme_error_stat": false, 01:23:49.450 "rdma_srq_size": 0, 01:23:49.450 "io_path_stat": false, 01:23:49.450 "allow_accel_sequence": false, 01:23:49.450 "rdma_max_cq_size": 0, 01:23:49.450 "rdma_cm_event_timeout_ms": 0, 01:23:49.450 "dhchap_digests": [ 01:23:49.450 "sha256", 01:23:49.450 "sha384", 01:23:49.450 "sha512" 01:23:49.450 ], 01:23:49.450 "dhchap_dhgroups": [ 01:23:49.450 "null", 01:23:49.450 "ffdhe2048", 01:23:49.450 "ffdhe3072", 01:23:49.450 "ffdhe4096", 01:23:49.450 "ffdhe6144", 01:23:49.450 "ffdhe8192" 01:23:49.450 ] 01:23:49.450 } 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "method": "bdev_nvme_set_hotplug", 01:23:49.450 "params": { 01:23:49.450 "period_us": 100000, 01:23:49.450 "enable": false 01:23:49.450 } 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "method": "bdev_malloc_create", 01:23:49.450 "params": { 01:23:49.450 "name": "malloc0", 01:23:49.450 "num_blocks": 8192, 01:23:49.450 "block_size": 4096, 01:23:49.450 "physical_block_size": 4096, 01:23:49.450 "uuid": "ec5c0263-e9fd-4490-80dd-28d0b70fee7f", 01:23:49.450 "optimal_io_boundary": 0, 01:23:49.450 "md_size": 0, 01:23:49.450 "dif_type": 0, 01:23:49.450 "dif_is_head_of_md": false, 01:23:49.450 "dif_pi_format": 0 01:23:49.450 } 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "method": "bdev_wait_for_examine" 01:23:49.450 } 01:23:49.450 ] 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "subsystem": "nbd", 01:23:49.450 "config": [] 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "subsystem": "scheduler", 01:23:49.450 "config": [ 01:23:49.450 { 01:23:49.450 "method": "framework_set_scheduler", 01:23:49.450 "params": { 01:23:49.450 "name": "static" 01:23:49.450 } 01:23:49.450 } 01:23:49.450 ] 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "subsystem": "nvmf", 01:23:49.450 "config": [ 01:23:49.450 { 01:23:49.450 "method": "nvmf_set_config", 01:23:49.450 "params": { 01:23:49.450 "discovery_filter": "match_any", 01:23:49.450 "admin_cmd_passthru": { 01:23:49.450 "identify_ctrlr": false 01:23:49.450 }, 01:23:49.450 "dhchap_digests": [ 01:23:49.450 "sha256", 01:23:49.450 "sha384", 01:23:49.450 "sha512" 01:23:49.450 ], 01:23:49.450 "dhchap_dhgroups": [ 
01:23:49.450 "null", 01:23:49.450 "ffdhe2048", 01:23:49.450 "ffdhe3072", 01:23:49.450 "ffdhe4096", 01:23:49.450 "ffdhe6144", 01:23:49.450 "ffdhe8192" 01:23:49.450 ] 01:23:49.450 } 01:23:49.450 }, 01:23:49.450 { 01:23:49.450 "method": "nvmf_set_max_subsystems", 01:23:49.451 "params": { 01:23:49.451 "max_subsystems": 1024 01:23:49.451 } 01:23:49.451 }, 01:23:49.451 { 01:23:49.451 "method": "nvmf_set_crdt", 01:23:49.451 "params": { 01:23:49.451 "crdt1": 0, 01:23:49.451 "crdt2": 0, 01:23:49.451 "crdt3": 0 01:23:49.451 } 01:23:49.451 }, 01:23:49.451 { 01:23:49.451 "method": "nvmf_create_transport", 01:23:49.451 "params": { 01:23:49.451 "trtype": "TCP", 01:23:49.451 "max_queue_depth": 128, 01:23:49.451 "max_io_qpairs_per_ctrlr": 127, 01:23:49.451 "in_capsule_data_size": 4096, 01:23:49.451 "max_io_size": 131072, 01:23:49.451 "io_unit_size": 131072, 01:23:49.451 "max_aq_depth": 128, 01:23:49.451 "num_shared_buffers": 511, 01:23:49.451 "buf_cache_size": 4294967295, 01:23:49.451 "dif_insert_or_strip": false, 01:23:49.451 "zcopy": false, 01:23:49.451 "c2h_success": false, 01:23:49.451 "sock_priority": 0, 01:23:49.451 "abort_timeout_sec": 1, 01:23:49.451 "ack_timeout": 0, 01:23:49.451 "data_wr_pool_size": 0 01:23:49.451 } 01:23:49.451 }, 01:23:49.451 { 01:23:49.451 "method": "nvmf_create_subsystem", 01:23:49.451 "params": { 01:23:49.451 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:49.451 "allow_any_host": false, 01:23:49.451 "serial_number": "00000000000000000000", 01:23:49.451 "model_number": "SPDK bdev Controller", 01:23:49.451 "max_namespaces": 32, 01:23:49.451 "min_cntlid": 1, 01:23:49.451 "max_cntlid": 65519, 01:23:49.451 "ana_reporting": false 01:23:49.451 } 01:23:49.451 }, 01:23:49.451 { 01:23:49.451 "method": "nvmf_subsystem_add_host", 01:23:49.451 "params": { 01:23:49.451 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:49.451 "host": "nqn.2016-06.io.spdk:host1", 01:23:49.451 "psk": "key0" 01:23:49.451 } 01:23:49.451 }, 01:23:49.451 { 01:23:49.451 "method": "nvmf_subsystem_add_ns", 01:23:49.451 "params": { 01:23:49.451 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:49.451 "namespace": { 01:23:49.451 "nsid": 1, 01:23:49.451 "bdev_name": "malloc0", 01:23:49.451 "nguid": "EC5C0263E9FD449080DD28D0B70FEE7F", 01:23:49.451 "uuid": "ec5c0263-e9fd-4490-80dd-28d0b70fee7f", 01:23:49.451 "no_auto_visible": false 01:23:49.451 } 01:23:49.451 } 01:23:49.451 }, 01:23:49.451 { 01:23:49.451 "method": "nvmf_subsystem_add_listener", 01:23:49.451 "params": { 01:23:49.451 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:49.451 "listen_address": { 01:23:49.451 "trtype": "TCP", 01:23:49.451 "adrfam": "IPv4", 01:23:49.451 "traddr": "10.0.0.3", 01:23:49.451 "trsvcid": "4420" 01:23:49.451 }, 01:23:49.451 "secure_channel": false, 01:23:49.451 "sock_impl": "ssl" 01:23:49.451 } 01:23:49.451 } 01:23:49.451 ] 01:23:49.451 } 01:23:49.451 ] 01:23:49.451 }' 01:23:49.451 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 01:23:49.710 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 01:23:49.710 "subsystems": [ 01:23:49.710 { 01:23:49.710 "subsystem": "keyring", 01:23:49.710 "config": [ 01:23:49.710 { 01:23:49.710 "method": "keyring_file_add_key", 01:23:49.710 "params": { 01:23:49.710 "name": "key0", 01:23:49.710 "path": "/tmp/tmp.rund1qqybI" 01:23:49.710 } 01:23:49.710 } 01:23:49.710 ] 01:23:49.710 }, 01:23:49.710 { 01:23:49.710 "subsystem": "iobuf", 01:23:49.710 "config": [ 01:23:49.710 { 01:23:49.710 "method": 
"iobuf_set_options", 01:23:49.710 "params": { 01:23:49.710 "small_pool_count": 8192, 01:23:49.711 "large_pool_count": 1024, 01:23:49.711 "small_bufsize": 8192, 01:23:49.711 "large_bufsize": 135168, 01:23:49.711 "enable_numa": false 01:23:49.711 } 01:23:49.711 } 01:23:49.711 ] 01:23:49.711 }, 01:23:49.711 { 01:23:49.711 "subsystem": "sock", 01:23:49.711 "config": [ 01:23:49.711 { 01:23:49.711 "method": "sock_set_default_impl", 01:23:49.711 "params": { 01:23:49.711 "impl_name": "uring" 01:23:49.711 } 01:23:49.711 }, 01:23:49.711 { 01:23:49.711 "method": "sock_impl_set_options", 01:23:49.711 "params": { 01:23:49.711 "impl_name": "ssl", 01:23:49.711 "recv_buf_size": 4096, 01:23:49.711 "send_buf_size": 4096, 01:23:49.711 "enable_recv_pipe": true, 01:23:49.711 "enable_quickack": false, 01:23:49.711 "enable_placement_id": 0, 01:23:49.711 "enable_zerocopy_send_server": true, 01:23:49.711 "enable_zerocopy_send_client": false, 01:23:49.711 "zerocopy_threshold": 0, 01:23:49.711 "tls_version": 0, 01:23:49.711 "enable_ktls": false 01:23:49.711 } 01:23:49.711 }, 01:23:49.711 { 01:23:49.711 "method": "sock_impl_set_options", 01:23:49.711 "params": { 01:23:49.711 "impl_name": "posix", 01:23:49.711 "recv_buf_size": 2097152, 01:23:49.711 "send_buf_size": 2097152, 01:23:49.711 "enable_recv_pipe": true, 01:23:49.711 "enable_quickack": false, 01:23:49.711 "enable_placement_id": 0, 01:23:49.711 "enable_zerocopy_send_server": true, 01:23:49.711 "enable_zerocopy_send_client": false, 01:23:49.711 "zerocopy_threshold": 0, 01:23:49.711 "tls_version": 0, 01:23:49.711 "enable_ktls": false 01:23:49.711 } 01:23:49.711 }, 01:23:49.711 { 01:23:49.711 "method": "sock_impl_set_options", 01:23:49.711 "params": { 01:23:49.711 "impl_name": "uring", 01:23:49.711 "recv_buf_size": 2097152, 01:23:49.711 "send_buf_size": 2097152, 01:23:49.711 "enable_recv_pipe": true, 01:23:49.711 "enable_quickack": false, 01:23:49.711 "enable_placement_id": 0, 01:23:49.711 "enable_zerocopy_send_server": false, 01:23:49.711 "enable_zerocopy_send_client": false, 01:23:49.711 "zerocopy_threshold": 0, 01:23:49.711 "tls_version": 0, 01:23:49.711 "enable_ktls": false 01:23:49.711 } 01:23:49.711 } 01:23:49.711 ] 01:23:49.711 }, 01:23:49.711 { 01:23:49.711 "subsystem": "vmd", 01:23:49.711 "config": [] 01:23:49.711 }, 01:23:49.711 { 01:23:49.711 "subsystem": "accel", 01:23:49.711 "config": [ 01:23:49.711 { 01:23:49.711 "method": "accel_set_options", 01:23:49.711 "params": { 01:23:49.711 "small_cache_size": 128, 01:23:49.711 "large_cache_size": 16, 01:23:49.711 "task_count": 2048, 01:23:49.711 "sequence_count": 2048, 01:23:49.711 "buf_count": 2048 01:23:49.711 } 01:23:49.711 } 01:23:49.711 ] 01:23:49.711 }, 01:23:49.711 { 01:23:49.711 "subsystem": "bdev", 01:23:49.711 "config": [ 01:23:49.711 { 01:23:49.711 "method": "bdev_set_options", 01:23:49.711 "params": { 01:23:49.711 "bdev_io_pool_size": 65535, 01:23:49.711 "bdev_io_cache_size": 256, 01:23:49.711 "bdev_auto_examine": true, 01:23:49.711 "iobuf_small_cache_size": 128, 01:23:49.711 "iobuf_large_cache_size": 16 01:23:49.711 } 01:23:49.711 }, 01:23:49.711 { 01:23:49.711 "method": "bdev_raid_set_options", 01:23:49.711 "params": { 01:23:49.711 "process_window_size_kb": 1024, 01:23:49.711 "process_max_bandwidth_mb_sec": 0 01:23:49.711 } 01:23:49.711 }, 01:23:49.711 { 01:23:49.711 "method": "bdev_iscsi_set_options", 01:23:49.711 "params": { 01:23:49.711 "timeout_sec": 30 01:23:49.711 } 01:23:49.711 }, 01:23:49.711 { 01:23:49.711 "method": "bdev_nvme_set_options", 01:23:49.711 "params": { 01:23:49.711 
"action_on_timeout": "none", 01:23:49.711 "timeout_us": 0, 01:23:49.711 "timeout_admin_us": 0, 01:23:49.711 "keep_alive_timeout_ms": 10000, 01:23:49.711 "arbitration_burst": 0, 01:23:49.711 "low_priority_weight": 0, 01:23:49.711 "medium_priority_weight": 0, 01:23:49.711 "high_priority_weight": 0, 01:23:49.711 "nvme_adminq_poll_period_us": 10000, 01:23:49.711 "nvme_ioq_poll_period_us": 0, 01:23:49.711 "io_queue_requests": 512, 01:23:49.711 "delay_cmd_submit": true, 01:23:49.711 "transport_retry_count": 4, 01:23:49.711 "bdev_retry_count": 3, 01:23:49.711 "transport_ack_timeout": 0, 01:23:49.711 "ctrlr_loss_timeout_sec": 0, 01:23:49.711 "reconnect_delay_sec": 0, 01:23:49.711 "fast_io_fail_timeout_sec": 0, 01:23:49.711 "disable_auto_failback": false, 01:23:49.711 "generate_uuids": false, 01:23:49.711 "transport_tos": 0, 01:23:49.711 "nvme_error_stat": false, 01:23:49.711 "rdma_srq_size": 0, 01:23:49.711 "io_path_stat": false, 01:23:49.711 "allow_accel_sequence": false, 01:23:49.711 "rdma_max_cq_size": 0, 01:23:49.711 "rdma_cm_event_timeout_ms": 0, 01:23:49.711 "dhchap_digests": [ 01:23:49.711 "sha256", 01:23:49.711 "sha384", 01:23:49.711 "sha512" 01:23:49.711 ], 01:23:49.711 "dhchap_dhgroups": [ 01:23:49.711 "null", 01:23:49.711 "ffdhe2048", 01:23:49.711 "ffdhe3072", 01:23:49.711 "ffdhe4096", 01:23:49.711 "ffdhe6144", 01:23:49.711 "ffdhe8192" 01:23:49.711 ] 01:23:49.711 } 01:23:49.711 }, 01:23:49.711 { 01:23:49.711 "method": "bdev_nvme_attach_controller", 01:23:49.711 "params": { 01:23:49.711 "name": "nvme0", 01:23:49.711 "trtype": "TCP", 01:23:49.711 "adrfam": "IPv4", 01:23:49.711 "traddr": "10.0.0.3", 01:23:49.711 "trsvcid": "4420", 01:23:49.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:23:49.711 "prchk_reftag": false, 01:23:49.711 "prchk_guard": false, 01:23:49.711 "ctrlr_loss_timeout_sec": 0, 01:23:49.711 "reconnect_delay_sec": 0, 01:23:49.711 "fast_io_fail_timeout_sec": 0, 01:23:49.711 "psk": "key0", 01:23:49.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:23:49.711 "hdgst": false, 01:23:49.711 "ddgst": false, 01:23:49.711 "multipath": "multipath" 01:23:49.711 } 01:23:49.711 }, 01:23:49.711 { 01:23:49.711 "method": "bdev_nvme_set_hotplug", 01:23:49.711 "params": { 01:23:49.711 "period_us": 100000, 01:23:49.711 "enable": false 01:23:49.711 } 01:23:49.711 }, 01:23:49.711 { 01:23:49.711 "method": "bdev_enable_histogram", 01:23:49.711 "params": { 01:23:49.711 "name": "nvme0n1", 01:23:49.711 "enable": true 01:23:49.711 } 01:23:49.711 }, 01:23:49.711 { 01:23:49.711 "method": "bdev_wait_for_examine" 01:23:49.711 } 01:23:49.711 ] 01:23:49.711 }, 01:23:49.711 { 01:23:49.711 "subsystem": "nbd", 01:23:49.711 "config": [] 01:23:49.711 } 01:23:49.711 ] 01:23:49.711 }' 01:23:49.711 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72222 01:23:49.711 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72222 ']' 01:23:49.711 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72222 01:23:49.711 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:49.711 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:49.711 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72222 01:23:49.711 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:23:49.711 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:23:49.711 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72222' 01:23:49.711 killing process with pid 72222 01:23:49.711 Received shutdown signal, test time was about 1.000000 seconds 01:23:49.711 01:23:49.711 Latency(us) 01:23:49.711 [2024-12-09T05:18:32.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:49.711 [2024-12-09T05:18:32.167Z] =================================================================================================================== 01:23:49.711 [2024-12-09T05:18:32.167Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:23:49.712 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72222 01:23:49.712 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72222 01:23:49.971 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72190 01:23:49.971 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72190 ']' 01:23:49.971 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72190 01:23:49.971 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:49.971 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:49.971 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72190 01:23:49.971 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:23:49.971 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:23:49.971 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72190' 01:23:49.971 killing process with pid 72190 01:23:49.971 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72190 01:23:49.971 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72190 01:23:50.230 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 01:23:50.230 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:23:50.230 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:23:50.230 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 01:23:50.230 "subsystems": [ 01:23:50.230 { 01:23:50.230 "subsystem": "keyring", 01:23:50.230 "config": [ 01:23:50.230 { 01:23:50.230 "method": "keyring_file_add_key", 01:23:50.230 "params": { 01:23:50.230 "name": "key0", 01:23:50.230 "path": "/tmp/tmp.rund1qqybI" 01:23:50.230 } 01:23:50.230 } 01:23:50.230 ] 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "subsystem": "iobuf", 01:23:50.230 "config": [ 01:23:50.230 { 01:23:50.230 "method": "iobuf_set_options", 01:23:50.230 "params": { 01:23:50.230 "small_pool_count": 8192, 01:23:50.230 "large_pool_count": 1024, 01:23:50.230 "small_bufsize": 8192, 01:23:50.230 "large_bufsize": 135168, 01:23:50.230 "enable_numa": false 01:23:50.230 } 01:23:50.230 } 01:23:50.230 ] 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "subsystem": "sock", 01:23:50.230 "config": [ 01:23:50.230 { 01:23:50.230 "method": "sock_set_default_impl", 01:23:50.230 "params": { 
01:23:50.230 "impl_name": "uring" 01:23:50.230 } 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "method": "sock_impl_set_options", 01:23:50.230 "params": { 01:23:50.230 "impl_name": "ssl", 01:23:50.230 "recv_buf_size": 4096, 01:23:50.230 "send_buf_size": 4096, 01:23:50.230 "enable_recv_pipe": true, 01:23:50.230 "enable_quickack": false, 01:23:50.230 "enable_placement_id": 0, 01:23:50.230 "enable_zerocopy_send_server": true, 01:23:50.230 "enable_zerocopy_send_client": false, 01:23:50.230 "zerocopy_threshold": 0, 01:23:50.230 "tls_version": 0, 01:23:50.230 "enable_ktls": false 01:23:50.230 } 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "method": "sock_impl_set_options", 01:23:50.230 "params": { 01:23:50.230 "impl_name": "posix", 01:23:50.230 "recv_buf_size": 2097152, 01:23:50.230 "send_buf_size": 2097152, 01:23:50.230 "enable_recv_pipe": true, 01:23:50.230 "enable_quickack": false, 01:23:50.230 "enable_placement_id": 0, 01:23:50.230 "enable_zerocopy_send_server": true, 01:23:50.230 "enable_zerocopy_send_client": false, 01:23:50.230 "zerocopy_threshold": 0, 01:23:50.230 "tls_version": 0, 01:23:50.230 "enable_ktls": false 01:23:50.230 } 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "method": "sock_impl_set_options", 01:23:50.230 "params": { 01:23:50.230 "impl_name": "uring", 01:23:50.230 "recv_buf_size": 2097152, 01:23:50.230 "send_buf_size": 2097152, 01:23:50.230 "enable_recv_pipe": true, 01:23:50.230 "enable_quickack": false, 01:23:50.230 "enable_placement_id": 0, 01:23:50.230 "enable_zerocopy_send_server": false, 01:23:50.230 "enable_zerocopy_send_client": false, 01:23:50.230 "zerocopy_threshold": 0, 01:23:50.230 "tls_version": 0, 01:23:50.230 "enable_ktls": false 01:23:50.230 } 01:23:50.230 } 01:23:50.230 ] 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "subsystem": "vmd", 01:23:50.230 "config": [] 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "subsystem": "accel", 01:23:50.230 "config": [ 01:23:50.230 { 01:23:50.230 "method": "accel_set_options", 01:23:50.230 "params": { 01:23:50.230 "small_cache_size": 128, 01:23:50.230 "large_cache_size": 16, 01:23:50.230 "task_count": 2048, 01:23:50.230 "sequence_count": 2048, 01:23:50.230 "buf_count": 2048 01:23:50.230 } 01:23:50.230 } 01:23:50.230 ] 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "subsystem": "bdev", 01:23:50.230 "config": [ 01:23:50.230 { 01:23:50.230 "method": "bdev_set_options", 01:23:50.230 "params": { 01:23:50.230 "bdev_io_pool_size": 65535, 01:23:50.230 "bdev_io_cache_size": 256, 01:23:50.230 "bdev_auto_examine": true, 01:23:50.230 "iobuf_small_cache_size": 128, 01:23:50.230 "iobuf_large_cache_size": 16 01:23:50.230 } 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "method": "bdev_raid_set_options", 01:23:50.230 "params": { 01:23:50.230 "process_window_size_kb": 1024, 01:23:50.230 "process_max_bandwidth_mb_sec": 0 01:23:50.230 } 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "method": "bdev_iscsi_set_options", 01:23:50.230 "params": { 01:23:50.230 "timeout_sec": 30 01:23:50.230 } 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "method": "bdev_nvme_set_options", 01:23:50.230 "params": { 01:23:50.230 "action_on_timeout": "none", 01:23:50.230 "timeout_us": 0, 01:23:50.230 "timeout_admin_us": 0, 01:23:50.230 "keep_alive_timeout_ms": 10000, 01:23:50.230 "arbitration_burst": 0, 01:23:50.230 "low_priority_weight": 0, 01:23:50.230 "medium_priority_weight": 0, 01:23:50.230 "high_priority_weight": 0, 01:23:50.230 "nvme_adminq_poll_period_us": 10000, 01:23:50.230 "nvme_ioq_poll_period_us": 0, 01:23:50.230 "io_queue_requests": 0, 01:23:50.230 "delay_cmd_submit": 
true, 01:23:50.230 "transport_retry_count": 4, 01:23:50.230 "bdev_retry_count": 3, 01:23:50.230 "transport_ack_timeout": 0, 01:23:50.230 "ctrlr_loss_timeout_sec": 0, 01:23:50.230 "reconnect_delay_sec": 0, 01:23:50.230 "fast_io_fail_timeout_sec": 0, 01:23:50.230 "disable_auto_failback": false, 01:23:50.230 "generate_uuids": false, 01:23:50.230 "transport_tos": 0, 01:23:50.230 "nvme_error_stat": false, 01:23:50.230 "rdma_srq_size": 0, 01:23:50.230 "io_path_stat": false, 01:23:50.230 "allow_accel_sequence": false, 01:23:50.230 "rdma_max_cq_size": 0, 01:23:50.230 "rdma_cm_event_timeout_ms": 0, 01:23:50.230 "dhchap_digests": [ 01:23:50.230 "sha256", 01:23:50.230 "sha384", 01:23:50.230 "sha512" 01:23:50.230 ], 01:23:50.230 "dhchap_dhgroups": [ 01:23:50.230 "null", 01:23:50.230 "ffdhe2048", 01:23:50.230 "ffdhe3072", 01:23:50.230 "ffdhe4096", 01:23:50.230 "ffdhe6144", 01:23:50.230 "ffdhe8192" 01:23:50.230 ] 01:23:50.230 } 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "method": "bdev_nvme_set_hotplug", 01:23:50.230 "params": { 01:23:50.230 "period_us": 100000, 01:23:50.230 "enable": false 01:23:50.230 } 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "method": "bdev_malloc_create", 01:23:50.230 "params": { 01:23:50.230 "name": "malloc0", 01:23:50.230 "num_blocks": 8192, 01:23:50.230 "block_size": 4096, 01:23:50.230 "physical_block_size": 4096, 01:23:50.230 "uuid": "ec5c0263-e9fd-4490-80dd-28d0b70fee7f", 01:23:50.230 "optimal_io_boundary": 0, 01:23:50.230 "md_size": 0, 01:23:50.230 "dif_type": 0, 01:23:50.230 "dif_is_head_of_md": false, 01:23:50.230 "dif_pi_format": 0 01:23:50.230 } 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "method": "bdev_wait_for_examine" 01:23:50.230 } 01:23:50.230 ] 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "subsystem": "nbd", 01:23:50.230 "config": [] 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "subsystem": "scheduler", 01:23:50.230 "config": [ 01:23:50.230 { 01:23:50.230 "method": "framework_set_scheduler", 01:23:50.230 "params": { 01:23:50.230 "name": "static" 01:23:50.230 } 01:23:50.230 } 01:23:50.230 ] 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "subsystem": "nvmf", 01:23:50.230 "config": [ 01:23:50.230 { 01:23:50.230 "method": "nvmf_set_config", 01:23:50.230 "params": { 01:23:50.230 "discovery_filter": "match_any", 01:23:50.230 "admin_cmd_passthru": { 01:23:50.230 "identify_ctrlr": false 01:23:50.230 }, 01:23:50.230 "dhchap_digests": [ 01:23:50.230 "sha256", 01:23:50.230 "sha384", 01:23:50.230 "sha512" 01:23:50.230 ], 01:23:50.230 "dhchap_dhgroups": [ 01:23:50.230 "null", 01:23:50.230 "ffdhe2048", 01:23:50.230 "ffdhe3072", 01:23:50.230 "ffdhe4096", 01:23:50.230 "ffdhe6144", 01:23:50.230 "ffdhe8192" 01:23:50.230 ] 01:23:50.230 } 01:23:50.230 }, 01:23:50.230 { 01:23:50.230 "method": "nvmf_set_max_subsystems", 01:23:50.230 "params": { 01:23:50.230 "max_subsystems": 1024 01:23:50.230 } 01:23:50.230 }, 01:23:50.230 { 01:23:50.231 "method": "nvmf_set_crdt", 01:23:50.231 "params": { 01:23:50.231 "crdt1": 0, 01:23:50.231 "crdt2": 0, 01:23:50.231 "crdt3": 0 01:23:50.231 } 01:23:50.231 }, 01:23:50.231 { 01:23:50.231 "method": "nvmf_create_transport", 01:23:50.231 "params": { 01:23:50.231 "trtype": "TCP", 01:23:50.231 "max_queue_depth": 128, 01:23:50.231 "max_io_qpairs_per_ctrlr": 127, 01:23:50.231 "in_capsule_data_size": 4096, 01:23:50.231 "max_io_size": 131072, 01:23:50.231 "io_unit_size": 131072, 01:23:50.231 "max_aq_depth": 128, 01:23:50.231 "num_shared_buffers": 511, 01:23:50.231 "buf_cache_size": 4294967295, 01:23:50.231 "dif_insert_or_strip": false, 01:23:50.231 "zcopy": false, 
01:23:50.231 "c2h_success": false, 01:23:50.231 "sock_priority": 0, 01:23:50.231 "abort_timeout_sec": 1, 01:23:50.231 "ack_timeout": 0, 01:23:50.231 "data_wr_pool_size": 0 01:23:50.231 } 01:23:50.231 }, 01:23:50.231 { 01:23:50.231 "method": "nvmf_create_subsystem", 01:23:50.231 "params": { 01:23:50.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:50.231 "allow_any_host": false, 01:23:50.231 "serial_number": "00000000000000000000", 01:23:50.231 "model_number": "SPDK bdev Controller", 01:23:50.231 "max_namespaces": 32, 01:23:50.231 "min_cntlid": 1, 01:23:50.231 "max_cntlid": 65519, 01:23:50.231 "ana_reporting": false 01:23:50.231 } 01:23:50.231 }, 01:23:50.231 { 01:23:50.231 "method": "nvmf_subsystem_add_host", 01:23:50.231 "params": { 01:23:50.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:50.231 "host": "nqn.2016-06.io.spdk:host1", 01:23:50.231 "psk": "key0" 01:23:50.231 } 01:23:50.231 }, 01:23:50.231 { 01:23:50.231 "method": "nvmf_subsystem_add_ns", 01:23:50.231 "params": { 01:23:50.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:50.231 "namespace": { 01:23:50.231 "nsid": 1, 01:23:50.231 "bdev_name": "malloc0", 01:23:50.231 "nguid": "EC5C0263E9FD449080DD28D0B70FEE7F", 01:23:50.231 "uuid": "ec5c0263-e9fd-4490-80dd-28d0b70fee7f", 01:23:50.231 "no_auto_visible": false 01:23:50.231 } 01:23:50.231 } 01:23:50.231 }, 01:23:50.231 { 01:23:50.231 "method": "nvmf_subsystem_add_listener", 01:23:50.231 "params": { 01:23:50.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:23:50.231 "listen_address": { 01:23:50.231 "trtype": "TCP", 01:23:50.231 "adrfam": "IPv4", 01:23:50.231 "traddr": "10.0.0.3", 01:23:50.231 "trsvcid": "4420" 01:23:50.231 }, 01:23:50.231 "secure_channel": false, 01:23:50.231 "sock_impl": "ssl" 01:23:50.231 } 01:23:50.231 } 01:23:50.231 ] 01:23:50.231 } 01:23:50.231 ] 01:23:50.231 }' 01:23:50.231 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:50.231 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72278 01:23:50.231 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 01:23:50.231 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72278 01:23:50.231 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72278 ']' 01:23:50.231 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:50.231 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:50.231 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:23:50.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:23:50.231 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:50.231 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:50.231 [2024-12-09 05:18:32.510423] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:23:50.231 [2024-12-09 05:18:32.510559] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:23:50.231 [2024-12-09 05:18:32.664509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:50.490 [2024-12-09 05:18:32.707916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:23:50.490 [2024-12-09 05:18:32.708063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:23:50.490 [2024-12-09 05:18:32.708073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:23:50.490 [2024-12-09 05:18:32.708078] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:23:50.490 [2024-12-09 05:18:32.708083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:23:50.490 [2024-12-09 05:18:32.708436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:23:50.490 [2024-12-09 05:18:32.862701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:50.490 [2024-12-09 05:18:32.933218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:23:50.749 [2024-12-09 05:18:32.965122] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:23:50.749 [2024-12-09 05:18:32.965387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:23:51.008 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:51.008 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:51.008 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:23:51.008 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:23:51.008 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:51.008 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:23:51.008 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72310 01:23:51.008 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72310 /var/tmp/bdevperf.sock 01:23:51.008 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72310 ']' 01:23:51.008 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:23:51.008 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 01:23:51.008 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:51.008 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:23:51.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
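On the target side, TLS for this run hinges on three entries in the JSON config that nvmf_tgt reads from /dev/fd/62 above: the keyring entry for key0, the host entry that binds that key as the PSK, and the listener entry that selects the ssl socket implementation. A trimmed sketch of just those pieces, with every value copied from the full config echoed in this log (the remaining iobuf/sock/bdev/transport/namespace methods from that dump are still required and are elided here):
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/tmp.rund1qqybI" } } ] },
    { "subsystem": "nvmf", "config": [
      { "method": "nvmf_subsystem_add_host",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "host": "nqn.2016-06.io.spdk:host1",
                    "psk": "key0" } },
      { "method": "nvmf_subsystem_add_listener",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                        "traddr": "10.0.0.3", "trsvcid": "4420" },
                    "secure_channel": false,
                    "sock_impl": "ssl" } } ] }
  ]
}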
01:23:51.008 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 01:23:51.008 "subsystems": [ 01:23:51.008 { 01:23:51.008 "subsystem": "keyring", 01:23:51.008 "config": [ 01:23:51.008 { 01:23:51.008 "method": "keyring_file_add_key", 01:23:51.008 "params": { 01:23:51.008 "name": "key0", 01:23:51.008 "path": "/tmp/tmp.rund1qqybI" 01:23:51.008 } 01:23:51.008 } 01:23:51.008 ] 01:23:51.008 }, 01:23:51.008 { 01:23:51.008 "subsystem": "iobuf", 01:23:51.008 "config": [ 01:23:51.008 { 01:23:51.008 "method": "iobuf_set_options", 01:23:51.008 "params": { 01:23:51.008 "small_pool_count": 8192, 01:23:51.008 "large_pool_count": 1024, 01:23:51.008 "small_bufsize": 8192, 01:23:51.008 "large_bufsize": 135168, 01:23:51.008 "enable_numa": false 01:23:51.008 } 01:23:51.008 } 01:23:51.008 ] 01:23:51.008 }, 01:23:51.008 { 01:23:51.008 "subsystem": "sock", 01:23:51.008 "config": [ 01:23:51.008 { 01:23:51.008 "method": "sock_set_default_impl", 01:23:51.008 "params": { 01:23:51.008 "impl_name": "uring" 01:23:51.008 } 01:23:51.008 }, 01:23:51.008 { 01:23:51.008 "method": "sock_impl_set_options", 01:23:51.008 "params": { 01:23:51.008 "impl_name": "ssl", 01:23:51.008 "recv_buf_size": 4096, 01:23:51.008 "send_buf_size": 4096, 01:23:51.008 "enable_recv_pipe": true, 01:23:51.008 "enable_quickack": false, 01:23:51.008 "enable_placement_id": 0, 01:23:51.008 "enable_zerocopy_send_server": true, 01:23:51.008 "enable_zerocopy_send_client": false, 01:23:51.008 "zerocopy_threshold": 0, 01:23:51.008 "tls_version": 0, 01:23:51.008 "enable_ktls": false 01:23:51.008 } 01:23:51.008 }, 01:23:51.008 { 01:23:51.008 "method": "sock_impl_set_options", 01:23:51.008 "params": { 01:23:51.008 "impl_name": "posix", 01:23:51.008 "recv_buf_size": 2097152, 01:23:51.008 "send_buf_size": 2097152, 01:23:51.008 "enable_recv_pipe": true, 01:23:51.008 "enable_quickack": false, 01:23:51.008 "enable_placement_id": 0, 01:23:51.008 "enable_zerocopy_send_server": true, 01:23:51.008 "enable_zerocopy_send_client": false, 01:23:51.008 "zerocopy_threshold": 0, 01:23:51.008 "tls_version": 0, 01:23:51.008 "enable_ktls": false 01:23:51.008 } 01:23:51.008 }, 01:23:51.008 { 01:23:51.008 "method": "sock_impl_set_options", 01:23:51.008 "params": { 01:23:51.008 "impl_name": "uring", 01:23:51.008 "recv_buf_size": 2097152, 01:23:51.008 "send_buf_size": 2097152, 01:23:51.008 "enable_recv_pipe": true, 01:23:51.008 "enable_quickack": false, 01:23:51.008 "enable_placement_id": 0, 01:23:51.008 "enable_zerocopy_send_server": false, 01:23:51.008 "enable_zerocopy_send_client": false, 01:23:51.008 "zerocopy_threshold": 0, 01:23:51.008 "tls_version": 0, 01:23:51.008 "enable_ktls": false 01:23:51.008 } 01:23:51.008 } 01:23:51.008 ] 01:23:51.008 }, 01:23:51.008 { 01:23:51.008 "subsystem": "vmd", 01:23:51.008 "config": [] 01:23:51.008 }, 01:23:51.008 { 01:23:51.008 "subsystem": "accel", 01:23:51.008 "config": [ 01:23:51.008 { 01:23:51.008 "method": "accel_set_options", 01:23:51.008 "params": { 01:23:51.008 "small_cache_size": 128, 01:23:51.008 "large_cache_size": 16, 01:23:51.008 "task_count": 2048, 01:23:51.008 "sequence_count": 2048, 01:23:51.008 "buf_count": 2048 01:23:51.008 } 01:23:51.008 } 01:23:51.008 ] 01:23:51.008 }, 01:23:51.008 { 01:23:51.008 "subsystem": "bdev", 01:23:51.008 "config": [ 01:23:51.008 { 01:23:51.008 "method": "bdev_set_options", 01:23:51.008 "params": { 01:23:51.008 "bdev_io_pool_size": 65535, 01:23:51.008 "bdev_io_cache_size": 256, 01:23:51.008 "bdev_auto_examine": true, 01:23:51.008 "iobuf_small_cache_size": 128, 01:23:51.008 
"iobuf_large_cache_size": 16 01:23:51.008 } 01:23:51.008 }, 01:23:51.008 { 01:23:51.008 "method": "bdev_raid_set_options", 01:23:51.008 "params": { 01:23:51.008 "process_window_size_kb": 1024, 01:23:51.008 "process_max_bandwidth_mb_sec": 0 01:23:51.008 } 01:23:51.008 }, 01:23:51.008 { 01:23:51.008 "method": "bdev_iscsi_set_options", 01:23:51.008 "params": { 01:23:51.008 "timeout_sec": 30 01:23:51.008 } 01:23:51.008 }, 01:23:51.008 { 01:23:51.008 "method": "bdev_nvme_set_options", 01:23:51.008 "params": { 01:23:51.008 "action_on_timeout": "none", 01:23:51.008 "timeout_us": 0, 01:23:51.008 "timeout_admin_us": 0, 01:23:51.008 "keep_alive_timeout_ms": 10000, 01:23:51.008 "arbitration_burst": 0, 01:23:51.008 "low_priority_weight": 0, 01:23:51.008 "medium_priority_weight": 0, 01:23:51.008 "high_priority_weight": 0, 01:23:51.008 "nvme_adminq_poll_period_us": 10000, 01:23:51.008 "nvme_ioq_poll_period_us": 0, 01:23:51.008 "io_queue_requests": 512, 01:23:51.008 "delay_cmd_submit": true, 01:23:51.008 "transport_retry_count": 4, 01:23:51.008 "bdev_retry_count": 3, 01:23:51.008 "transport_ack_timeout": 0, 01:23:51.008 "ctrlr_loss_timeout_sec": 0, 01:23:51.008 "reconnect_delay_sec": 0, 01:23:51.009 "fast_io_fail_timeout_sec": 0, 01:23:51.009 "disable_auto_failback": false, 01:23:51.009 "generate_uuids": false, 01:23:51.009 "transport_tos": 0, 01:23:51.009 "nvme_error_stat": false, 01:23:51.009 "rdma_srq_size": 0, 01:23:51.009 "io_path_stat": false, 01:23:51.009 "allow_accel_sequence": false, 01:23:51.009 "rdma_max_cq_size": 0, 01:23:51.009 "rdma_cm_event_timeout_ms": 0, 01:23:51.009 "dhchap_digests": [ 01:23:51.009 "sha256", 01:23:51.009 "sha384", 01:23:51.009 "sha512" 01:23:51.009 ], 01:23:51.009 "dhchap_dhgroups": [ 01:23:51.009 "null", 01:23:51.009 "ffdhe2048", 01:23:51.009 "ffdhe3072", 01:23:51.009 "ffdhe4096", 01:23:51.009 "ffdhe6144", 01:23:51.009 "ffdhe8192" 01:23:51.009 ] 01:23:51.009 } 01:23:51.009 }, 01:23:51.009 { 01:23:51.009 "method": "bdev_nvme_attach_controller", 01:23:51.009 "params": { 01:23:51.009 "name": "nvme0", 01:23:51.009 "trtype": "TCP", 01:23:51.009 "adrfam": "IPv4", 01:23:51.009 "traddr": "10.0.0.3", 01:23:51.009 "trsvcid": "4420", 01:23:51.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:23:51.009 "prchk_reftag": false, 01:23:51.009 "prchk_guard": false, 01:23:51.009 "ctrlr_loss_timeout_sec": 0, 01:23:51.009 "reconnect_delay_sec": 0, 01:23:51.009 "fast_io_fail_timeout_sec": 0, 01:23:51.009 "psk": "key0", 01:23:51.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:23:51.009 "hdgst": false, 01:23:51.009 "ddgst": false, 01:23:51.009 "multipath": "multipath" 01:23:51.009 } 01:23:51.009 }, 01:23:51.009 { 01:23:51.009 "method": "bdev_nvme_set_hotplug", 01:23:51.009 "params": { 01:23:51.009 "period_us": 100000, 01:23:51.009 "enable": false 01:23:51.009 } 01:23:51.009 }, 01:23:51.009 { 01:23:51.009 "method": "bdev_enable_histogram", 01:23:51.009 "params": { 01:23:51.009 "name": "nvme0n1", 01:23:51.009 "enable": true 01:23:51.009 } 01:23:51.009 }, 01:23:51.009 { 01:23:51.009 "method": "bdev_wait_for_examine" 01:23:51.009 } 01:23:51.009 ] 01:23:51.009 }, 01:23:51.009 { 01:23:51.009 "subsystem": "nbd", 01:23:51.009 "config": [] 01:23:51.009 } 01:23:51.009 ] 01:23:51.009 }' 01:23:51.009 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:51.009 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:51.267 [2024-12-09 05:18:33.475372] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 
initialization... 01:23:51.267 [2024-12-09 05:18:33.475507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72310 ] 01:23:51.267 [2024-12-09 05:18:33.628406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:51.267 [2024-12-09 05:18:33.678375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:23:51.526 [2024-12-09 05:18:33.799354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:51.526 [2024-12-09 05:18:33.842347] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:23:52.094 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:52.094 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:23:52.094 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:23:52.094 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 01:23:52.352 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:23:52.352 05:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:23:52.352 Running I/O for 1 seconds... 01:23:53.288 5649.00 IOPS, 22.07 MiB/s 01:23:53.288 Latency(us) 01:23:53.288 [2024-12-09T05:18:35.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:53.288 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:23:53.288 Verification LBA range: start 0x0 length 0x2000 01:23:53.288 nvme0n1 : 1.02 5664.59 22.13 0.00 0.00 22369.11 5323.01 31823.59 01:23:53.289 [2024-12-09T05:18:35.745Z] =================================================================================================================== 01:23:53.289 [2024-12-09T05:18:35.745Z] Total : 5664.59 22.13 0.00 0.00 22369.11 5323.01 31823.59 01:23:53.289 { 01:23:53.289 "results": [ 01:23:53.289 { 01:23:53.289 "job": "nvme0n1", 01:23:53.289 "core_mask": "0x2", 01:23:53.289 "workload": "verify", 01:23:53.289 "status": "finished", 01:23:53.289 "verify_range": { 01:23:53.289 "start": 0, 01:23:53.289 "length": 8192 01:23:53.289 }, 01:23:53.289 "queue_depth": 128, 01:23:53.289 "io_size": 4096, 01:23:53.289 "runtime": 1.019845, 01:23:53.289 "iops": 5664.586285170786, 01:23:53.289 "mibps": 22.127290176448383, 01:23:53.289 "io_failed": 0, 01:23:53.289 "io_timeout": 0, 01:23:53.289 "avg_latency_us": 22369.114522352986, 01:23:53.289 "min_latency_us": 5323.01135371179, 01:23:53.289 "max_latency_us": 31823.594759825326 01:23:53.289 } 01:23:53.289 ], 01:23:53.289 "core_count": 1 01:23:53.289 } 01:23:53.289 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 01:23:53.289 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 01:23:53.289 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 01:23:53.289 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 01:23:53.289 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@813 -- # id=0 01:23:53.289 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 01:23:53.289 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:23:53.289 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 01:23:53.289 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 01:23:53.289 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 01:23:53.289 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:23:53.289 nvmf_trace.0 01:23:53.547 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 01:23:53.547 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72310 01:23:53.547 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72310 ']' 01:23:53.547 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72310 01:23:53.547 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:53.547 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:53.547 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72310 01:23:53.547 killing process with pid 72310 01:23:53.547 Received shutdown signal, test time was about 1.000000 seconds 01:23:53.547 01:23:53.547 Latency(us) 01:23:53.547 [2024-12-09T05:18:36.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:23:53.547 [2024-12-09T05:18:36.003Z] =================================================================================================================== 01:23:53.547 [2024-12-09T05:18:36.003Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:23:53.547 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:23:53.547 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:23:53.547 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72310' 01:23:53.547 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72310 01:23:53.547 05:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72310 01:23:53.806 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 01:23:53.806 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 01:23:53.806 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 01:23:53.806 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:23:53.806 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 01:23:53.806 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 01:23:53.806 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:23:53.806 rmmod nvme_tcp 01:23:53.806 rmmod nvme_fabrics 01:23:53.806 rmmod nvme_keyring 01:23:53.806 05:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:23:53.806 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 01:23:53.806 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 01:23:53.806 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72278 ']' 01:23:53.806 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72278 01:23:53.806 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72278 ']' 01:23:53.806 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72278 01:23:53.806 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:23:53.806 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:23:53.806 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72278 01:23:53.806 killing process with pid 72278 01:23:53.807 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:23:53.807 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:23:53.807 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72278' 01:23:53.807 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72278 01:23:53.807 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72278 01:23:54.065 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:23:54.065 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:23:54.065 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:23:54.065 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 01:23:54.065 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 01:23:54.065 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:23:54.065 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 01:23:54.065 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:23:54.065 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:23:54.065 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:23:54.065 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:23:54.066 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ye9BBM2pOr /tmp/tmp.amUlndqDMK /tmp/tmp.rund1qqybI 01:23:54.324 ************************************ 01:23:54.324 END TEST nvmf_tls 01:23:54.324 ************************************ 01:23:54.324 01:23:54.324 real 1m24.720s 01:23:54.324 user 2m14.135s 01:23:54.324 sys 0m27.063s 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 01:23:54.324 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:23:54.583 ************************************ 01:23:54.583 START TEST nvmf_fips 01:23:54.583 ************************************ 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 01:23:54.583 * Looking for test storage... 
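[Editor's note] The teardown just traced (target/tls.sh cleanup followed by nvmftestfini) follows a fixed pattern: the bdevperf and nvmf_tgt processes are killed by PID, the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, the veth/bridge topology is removed, and the suite's own firewall rules are dropped by filtering them out of the saved ruleset (the iptr helper at nvmf/common.sh@791). A minimal standalone sketch of that last step — only the three piped commands are taken from the trace, the script wrapper is illustrative:

    #!/usr/bin/env bash
    # Drop only the iptables rules this harness added: every rule it inserts
    # carries an "SPDK_NVMF" comment (see the ipts calls later in the trace),
    # so teardown filters those lines out of the saved ruleset and restores
    # the remainder untouched.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    iptr
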
01:23:54.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:23:54.583 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 01:23:54.584 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 01:23:54.584 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 01:23:54.584 05:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:23:54.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:23:54.584 --rc genhtml_branch_coverage=1 01:23:54.584 --rc genhtml_function_coverage=1 01:23:54.584 --rc genhtml_legend=1 01:23:54.584 --rc geninfo_all_blocks=1 01:23:54.584 --rc geninfo_unexecuted_blocks=1 01:23:54.584 01:23:54.584 ' 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:23:54.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:23:54.584 --rc genhtml_branch_coverage=1 01:23:54.584 --rc genhtml_function_coverage=1 01:23:54.584 --rc genhtml_legend=1 01:23:54.584 --rc geninfo_all_blocks=1 01:23:54.584 --rc geninfo_unexecuted_blocks=1 01:23:54.584 01:23:54.584 ' 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:23:54.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:23:54.584 --rc genhtml_branch_coverage=1 01:23:54.584 --rc genhtml_function_coverage=1 01:23:54.584 --rc genhtml_legend=1 01:23:54.584 --rc geninfo_all_blocks=1 01:23:54.584 --rc geninfo_unexecuted_blocks=1 01:23:54.584 01:23:54.584 ' 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:23:54.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:23:54.584 --rc genhtml_branch_coverage=1 01:23:54.584 --rc genhtml_function_coverage=1 01:23:54.584 --rc genhtml_legend=1 01:23:54.584 --rc geninfo_all_blocks=1 01:23:54.584 --rc geninfo_unexecuted_blocks=1 01:23:54.584 01:23:54.584 ' 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
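[Editor's note] The block just traced is scripts/common.sh's cmp_versions: both version strings are split on '.', '-' and ':' and their numeric fields are compared in order up to the longer length, so "lt 1.15 2" succeeds and the lcov-1.x --rc options get exported; the same helper is reused a little further down to confirm OpenSSL 3.1.1 >= 3.0.0 before the FIPS provider is probed. A condensed sketch of the comparison loop — simplified to a single three-way result, with a hypothetical name; the real helper also validates each field and dispatches on the requested operator:

    # Compare two version strings field by field; prints lt, gt or eq.
    cmp_versions_sketch() {
        local -a ver1 ver2
        local v len
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo gt; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo lt; return; }
        done
        echo eq
    }
    cmp_versions_sketch 1.15 2      # lt  -> lcov predates 2.x, keep the legacy --rc options
    cmp_versions_sketch 3.1.1 3.0.0 # gt  -> OpenSSL is new enough for the FIPS checks below
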
01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:23:54.584 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:23:54.843 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:23:54.844 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 01:23:54.844 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 01:23:54.845 Error setting digest 01:23:54.845 40A28606607F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 01:23:54.845 40A28606607F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:23:54.845 
05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:23:54.845 Cannot find device "nvmf_init_br" 01:23:54.845 05:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:23:54.845 Cannot find device "nvmf_init_br2" 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:23:54.845 Cannot find device "nvmf_tgt_br" 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 01:23:54.845 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:23:55.104 Cannot find device "nvmf_tgt_br2" 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:23:55.104 Cannot find device "nvmf_init_br" 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:23:55.104 Cannot find device "nvmf_init_br2" 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:23:55.104 Cannot find device "nvmf_tgt_br" 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:23:55.104 Cannot find device "nvmf_tgt_br2" 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:23:55.104 Cannot find device "nvmf_br" 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:23:55.104 Cannot find device "nvmf_init_if" 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:23:55.104 Cannot find device "nvmf_init_if2" 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:23:55.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:23:55.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:23:55.104 05:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:23:55.104 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:23:55.378 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:23:55.378 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 01:23:55.378 01:23:55.378 --- 10.0.0.3 ping statistics --- 01:23:55.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:23:55.378 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:23:55.378 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:23:55.378 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.093 ms 01:23:55.378 01:23:55.378 --- 10.0.0.4 ping statistics --- 01:23:55.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:23:55.378 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:23:55.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:23:55.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 01:23:55.378 01:23:55.378 --- 10.0.0.1 ping statistics --- 01:23:55.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:23:55.378 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:23:55.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:23:55.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 01:23:55.378 01:23:55.378 --- 10.0.0.2 ping statistics --- 01:23:55.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:23:55.378 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:23:55.378 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:23:55.379 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:23:55.379 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 01:23:55.379 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:23:55.379 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 01:23:55.379 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:23:55.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:23:55.379 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72634 01:23:55.379 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72634 01:23:55.379 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72634 ']' 01:23:55.379 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:23:55.379 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:55.379 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:23:55.379 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:55.379 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:23:55.379 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:23:55.379 [2024-12-09 05:18:37.710312] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
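[Editor's note] Before the target came up, nvmf_veth_init (traced above) built a small virtual topology: a network namespace nvmf_tgt_ns_spdk holding the target-side interfaces at 10.0.0.3 and 10.0.0.4, initiator-side interfaces at 10.0.0.1 and 10.0.0.2 in the root namespace, a bridge nvmf_br joining the peer ends, iptables ACCEPT rules for NVMe/TCP port 4420, and ping checks in both directions. A stripped-down sketch of that topology for one interface pair — names, addresses and flags are copied from the trace; error handling and the second pair are omitted:

    #!/usr/bin/env bash
    set -e
    # The SPDK target lives in its own namespace so it and the local initiator
    # can talk over a veth pair bridged in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Allow the NVMe/TCP port in, tagged so teardown can find the rule again.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3   # initiator -> target reachability check, as in the trace
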
01:23:55.379 [2024-12-09 05:18:37.710392] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:23:55.638 [2024-12-09 05:18:37.860033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:55.638 [2024-12-09 05:18:37.934239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:23:55.638 [2024-12-09 05:18:37.934295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:23:55.638 [2024-12-09 05:18:37.934301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:23:55.638 [2024-12-09 05:18:37.934306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:23:55.638 [2024-12-09 05:18:37.934310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:23:55.638 [2024-12-09 05:18:37.934670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:23:55.638 [2024-12-09 05:18:38.009279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:56.206 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:56.206 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 01:23:56.206 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:23:56.206 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 01:23:56.206 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:23:56.206 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:23:56.206 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 01:23:56.206 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 01:23:56.206 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 01:23:56.206 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.93R 01:23:56.206 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 01:23:56.206 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.93R 01:23:56.206 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.93R 01:23:56.206 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.93R 01:23:56.206 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:23:56.464 [2024-12-09 05:18:38.776542] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:23:56.464 [2024-12-09 05:18:38.792435] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:23:56.464 [2024-12-09 05:18:38.792712] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:23:56.464 malloc0 01:23:56.464 05:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:23:56.464 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72670 01:23:56.464 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:23:56.464 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72670 /var/tmp/bdevperf.sock 01:23:56.464 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72670 ']' 01:23:56.464 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:23:56.464 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 01:23:56.464 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:23:56.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:23:56.464 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 01:23:56.464 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:23:56.722 [2024-12-09 05:18:38.940116] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:23:56.722 [2024-12-09 05:18:38.940276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72670 ] 01:23:56.722 [2024-12-09 05:18:39.093779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:23:56.722 [2024-12-09 05:18:39.147114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:23:56.980 [2024-12-09 05:18:39.189028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:23:57.574 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:23:57.574 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 01:23:57.574 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.93R 01:23:57.574 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:23:57.849 [2024-12-09 05:18:40.158403] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:23:57.849 TLSTESTn1 01:23:57.849 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:23:58.108 Running I/O for 10 seconds... 
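[Editor's note] The ten-second run whose per-second samples appear next is driven entirely over bdevperf's RPC socket: the PSK written to /tmp/spdk-psk.93R is registered as a keyring entry, a TLS-protected controller is attached to the listener at 10.0.0.3:4420, and perform_tests starts the verify workload bdevperf was launched with. Condensed from the commands in the trace above into one sequence — paths and arguments are copied from the log, the script wrapper itself is illustrative:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # 1. Register the pre-shared TLS key with bdevperf's keyring.
    $rpc -s $sock keyring_file_add_key key0 /tmp/spdk-psk.93R
    # 2. Attach a TLS-enabled NVMe-oF controller to the target's listener.
    $rpc -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # 3. Kick off the workload bdevperf was started with (-q 128 -o 4096 -w verify -t 10).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
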
01:23:59.983 5120.00 IOPS, 20.00 MiB/s [2024-12-09T05:18:43.374Z] 5129.00 IOPS, 20.04 MiB/s [2024-12-09T05:18:44.750Z] 5160.00 IOPS, 20.16 MiB/s [2024-12-09T05:18:45.686Z] 5150.50 IOPS, 20.12 MiB/s [2024-12-09T05:18:46.622Z] 5141.80 IOPS, 20.09 MiB/s [2024-12-09T05:18:47.617Z] 5135.00 IOPS, 20.06 MiB/s [2024-12-09T05:18:48.553Z] 5133.14 IOPS, 20.05 MiB/s [2024-12-09T05:18:49.486Z] 5133.50 IOPS, 20.05 MiB/s [2024-12-09T05:18:50.421Z] 5134.22 IOPS, 20.06 MiB/s [2024-12-09T05:18:50.421Z] 5142.00 IOPS, 20.09 MiB/s 01:24:07.965 Latency(us) 01:24:07.965 [2024-12-09T05:18:50.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:24:07.965 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:24:07.965 Verification LBA range: start 0x0 length 0x2000 01:24:07.965 TLSTESTn1 : 10.02 5143.62 20.09 0.00 0.00 24840.05 7440.77 18773.63 01:24:07.965 [2024-12-09T05:18:50.421Z] =================================================================================================================== 01:24:07.965 [2024-12-09T05:18:50.421Z] Total : 5143.62 20.09 0.00 0.00 24840.05 7440.77 18773.63 01:24:07.965 { 01:24:07.965 "results": [ 01:24:07.965 { 01:24:07.965 "job": "TLSTESTn1", 01:24:07.965 "core_mask": "0x4", 01:24:07.965 "workload": "verify", 01:24:07.965 "status": "finished", 01:24:07.965 "verify_range": { 01:24:07.965 "start": 0, 01:24:07.965 "length": 8192 01:24:07.965 }, 01:24:07.965 "queue_depth": 128, 01:24:07.965 "io_size": 4096, 01:24:07.965 "runtime": 10.021158, 01:24:07.965 "iops": 5143.61713486605, 01:24:07.966 "mibps": 20.092254433070508, 01:24:07.966 "io_failed": 0, 01:24:07.966 "io_timeout": 0, 01:24:07.966 "avg_latency_us": 24840.052842485962, 01:24:07.966 "min_latency_us": 7440.768558951965, 01:24:07.966 "max_latency_us": 18773.631441048034 01:24:07.966 } 01:24:07.966 ], 01:24:07.966 "core_count": 1 01:24:07.966 } 01:24:07.966 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 01:24:07.966 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 01:24:07.966 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 01:24:07.966 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 01:24:07.966 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 01:24:07.966 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:24:07.966 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 01:24:07.966 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 01:24:07.966 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 01:24:07.966 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:24:07.966 nvmf_trace.0 01:24:08.225 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 01:24:08.225 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72670 01:24:08.225 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72670 ']' 01:24:08.225 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
72670 01:24:08.225 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 01:24:08.225 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:08.225 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72670 01:24:08.225 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:24:08.225 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:24:08.225 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72670' 01:24:08.225 killing process with pid 72670 01:24:08.225 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72670 01:24:08.225 Received shutdown signal, test time was about 10.000000 seconds 01:24:08.225 01:24:08.225 Latency(us) 01:24:08.225 [2024-12-09T05:18:50.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:24:08.225 [2024-12-09T05:18:50.681Z] =================================================================================================================== 01:24:08.225 [2024-12-09T05:18:50.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:24:08.225 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72670 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:24:08.486 rmmod nvme_tcp 01:24:08.486 rmmod nvme_fabrics 01:24:08.486 rmmod nvme_keyring 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72634 ']' 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72634 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72634 ']' 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72634 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72634 01:24:08.486 killing process with pid 72634 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72634' 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72634 01:24:08.486 05:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72634 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 01:24:09.070 05:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.93R 01:24:09.070 01:24:09.070 real 0m14.703s 01:24:09.070 user 0m18.435s 01:24:09.070 sys 0m6.717s 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:09.070 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:24:09.070 ************************************ 01:24:09.070 END TEST nvmf_fips 01:24:09.070 ************************************ 01:24:09.329 05:18:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 01:24:09.329 05:18:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:24:09.329 05:18:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:09.329 05:18:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:24:09.329 ************************************ 01:24:09.329 START TEST nvmf_control_msg_list 01:24:09.329 ************************************ 01:24:09.329 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 01:24:09.329 * Looking for test storage... 01:24:09.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:24:09.329 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:24:09.329 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 01:24:09.329 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:24:09.329 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:24:09.329 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:24:09.329 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:24:09.330 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:24:09.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:09.590 --rc genhtml_branch_coverage=1 01:24:09.590 --rc genhtml_function_coverage=1 01:24:09.590 --rc genhtml_legend=1 01:24:09.590 --rc geninfo_all_blocks=1 01:24:09.590 --rc geninfo_unexecuted_blocks=1 01:24:09.590 01:24:09.590 ' 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:24:09.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:09.590 --rc genhtml_branch_coverage=1 01:24:09.590 --rc genhtml_function_coverage=1 01:24:09.590 --rc genhtml_legend=1 01:24:09.590 --rc geninfo_all_blocks=1 01:24:09.590 --rc geninfo_unexecuted_blocks=1 01:24:09.590 01:24:09.590 ' 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:24:09.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:09.590 --rc genhtml_branch_coverage=1 01:24:09.590 --rc genhtml_function_coverage=1 01:24:09.590 --rc genhtml_legend=1 01:24:09.590 --rc geninfo_all_blocks=1 01:24:09.590 --rc geninfo_unexecuted_blocks=1 01:24:09.590 01:24:09.590 ' 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:24:09.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:09.590 --rc genhtml_branch_coverage=1 01:24:09.590 --rc genhtml_function_coverage=1 01:24:09.590 --rc genhtml_legend=1 01:24:09.590 --rc geninfo_all_blocks=1 01:24:09.590 --rc 
geninfo_unexecuted_blocks=1 01:24:09.590 01:24:09.590 ' 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:24:09.590 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:24:09.590 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:24:09.591 Cannot find device "nvmf_init_br" 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:24:09.591 Cannot find device "nvmf_init_br2" 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:24:09.591 Cannot find device "nvmf_tgt_br" 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:24:09.591 Cannot find device "nvmf_tgt_br2" 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:24:09.591 Cannot find device "nvmf_init_br" 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:24:09.591 Cannot find device "nvmf_init_br2" 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:24:09.591 Cannot find device "nvmf_tgt_br" 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:24:09.591 Cannot find device "nvmf_tgt_br2" 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:24:09.591 Cannot find device "nvmf_br" 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:24:09.591 Cannot find 
device "nvmf_init_if" 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:24:09.591 Cannot find device "nvmf_init_if2" 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:09.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:09.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 01:24:09.591 05:18:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:24:09.591 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:24:09.591 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:24:09.591 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:24:09.591 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:24:09.591 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:24:09.851 05:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:24:09.851 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:24:09.851 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 01:24:09.851 01:24:09.851 --- 10.0.0.3 ping statistics --- 01:24:09.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:09.851 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:24:09.851 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:24:09.851 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 01:24:09.851 01:24:09.851 --- 10.0.0.4 ping statistics --- 01:24:09.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:09.851 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 01:24:09.851 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:24:09.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:24:09.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 01:24:09.852 01:24:09.852 --- 10.0.0.1 ping statistics --- 01:24:09.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:09.852 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:24:09.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:24:09.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 01:24:09.852 01:24:09.852 --- 10.0.0.2 ping statistics --- 01:24:09.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:09.852 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73060 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73060 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73060 ']' 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:09.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
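For reference, the virtual topology that nvmf_veth_init assembles in the trace above can be reproduced on its own with roughly the commands below. This is a condensed sketch using the interface names and 10.0.0.0/24 addresses from the trace, not the full common.sh logic; the second initiator/target interface pair (nvmf_init_if2/nvmf_tgt_if2) and the teardown path are omitted.
# Namespace and veth pairs (one initiator-side pair, one target-side pair)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# Addressing: initiator 10.0.0.1 on the host, target 10.0.0.3 inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the host-side veth peers together so initiator and target can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Allow NVMe/TCP traffic (port 4420) and bridged forwarding, as in the trace
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Reachability check before starting the target
ping -c 1 10.0.0.3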
01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:24:09.852 05:18:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:24:09.852 [2024-12-09 05:18:52.249415] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:24:09.852 [2024-12-09 05:18:52.249481] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:24:10.114 [2024-12-09 05:18:52.400926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:10.114 [2024-12-09 05:18:52.450776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:24:10.114 [2024-12-09 05:18:52.450820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:24:10.114 [2024-12-09 05:18:52.450826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:24:10.114 [2024-12-09 05:18:52.450831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:24:10.114 [2024-12-09 05:18:52.450835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:24:10.114 [2024-12-09 05:18:52.451107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:10.114 [2024-12-09 05:18:52.492014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:24:10.683 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:10.683 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 01:24:10.683 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:24:10.683 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 01:24:10.683 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:24:10.942 [2024-12-09 05:18:53.181664] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:24:10.942 05:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:24:10.942 Malloc0 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:24:10.942 [2024-12-09 05:18:53.240054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73092 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73093 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73094 01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 
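The three perf PIDs launched above are the whole control_msg_list workload. Reassembled from the rpc_cmd and spdk_nvme_perf lines in this trace, the target setup and one of the initiator jobs look roughly like the following sketch (rpc_cmd in the SPDK test harness forwards to scripts/rpc.py against the target's /var/tmp/spdk.sock; arguments are copied from the trace):
# TCP transport with a small in-capsule data size and a single control message
scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
# Subsystem with one 32 MiB / 512-byte-block malloc namespace, listening on 10.0.0.3:4420
scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# One of three 1-second randread jobs (cores 0x2, 0x4, 0x8), queue depth 1, 4 KiB I/O
build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'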
01:24:10.942 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73092 01:24:11.200 [2024-12-09 05:18:53.410121] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 01:24:11.200 [2024-12-09 05:18:53.440009] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 01:24:11.200 [2024-12-09 05:18:53.440173] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 01:24:12.137 Initializing NVMe Controllers 01:24:12.137 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 01:24:12.137 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 01:24:12.137 Initialization complete. Launching workers. 01:24:12.137 ======================================================== 01:24:12.138 Latency(us) 01:24:12.138 Device Information : IOPS MiB/s Average min max 01:24:12.138 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 5145.00 20.10 194.16 83.24 760.29 01:24:12.138 ======================================================== 01:24:12.138 Total : 5145.00 20.10 194.16 83.24 760.29 01:24:12.138 01:24:12.138 Initializing NVMe Controllers 01:24:12.138 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 01:24:12.138 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 01:24:12.138 Initialization complete. Launching workers. 01:24:12.138 ======================================================== 01:24:12.138 Latency(us) 01:24:12.138 Device Information : IOPS MiB/s Average min max 01:24:12.138 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5037.98 19.68 198.27 106.53 730.21 01:24:12.138 ======================================================== 01:24:12.138 Total : 5037.98 19.68 198.27 106.53 730.21 01:24:12.138 01:24:12.138 Initializing NVMe Controllers 01:24:12.138 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 01:24:12.138 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 01:24:12.138 Initialization complete. Launching workers. 
01:24:12.138 ======================================================== 01:24:12.138 Latency(us) 01:24:12.138 Device Information : IOPS MiB/s Average min max 01:24:12.138 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 5059.00 19.76 197.46 89.85 775.52 01:24:12.138 ======================================================== 01:24:12.138 Total : 5059.00 19.76 197.46 89.85 775.52 01:24:12.138 01:24:12.138 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73093 01:24:12.138 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73094 01:24:12.138 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:24:12.138 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 01:24:12.138 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 01:24:12.138 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 01:24:12.138 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:24:12.138 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 01:24:12.138 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 01:24:12.138 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:24:12.138 rmmod nvme_tcp 01:24:12.395 rmmod nvme_fabrics 01:24:12.395 rmmod nvme_keyring 01:24:12.395 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:24:12.395 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 01:24:12.395 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 01:24:12.395 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73060 ']' 01:24:12.395 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73060 01:24:12.395 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73060 ']' 01:24:12.395 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73060 01:24:12.395 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 01:24:12.395 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:12.395 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73060 01:24:12.395 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:24:12.395 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:24:12.395 killing process with pid 73060 01:24:12.395 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73060' 01:24:12.395 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73060 01:24:12.395 05:18:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73060 01:24:12.652 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:24:12.652 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:24:12.652 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:24:12.652 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 01:24:12.652 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 01:24:12.652 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:24:12.652 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 01:24:12.652 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:24:12.652 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:24:12.652 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:24:12.652 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:24:12.652 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:24:12.652 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:24:12.652 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:24:12.909 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:24:12.909 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:24:12.909 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:24:12.909 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:24:12.909 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:24:12.909 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:24:12.909 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:12.909 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:12.909 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 01:24:12.909 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:12.909 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:12.909 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:12.909 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 01:24:12.909 01:24:12.909 real 0m3.730s 01:24:12.909 user 0m5.721s 01:24:12.909 
sys 0m1.488s 01:24:12.909 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:12.909 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:24:12.909 ************************************ 01:24:12.909 END TEST nvmf_control_msg_list 01:24:12.909 ************************************ 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:24:13.168 ************************************ 01:24:13.168 START TEST nvmf_wait_for_buf 01:24:13.168 ************************************ 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 01:24:13.168 * Looking for test storage... 01:24:13.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:24:13.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:13.168 --rc genhtml_branch_coverage=1 01:24:13.168 --rc genhtml_function_coverage=1 01:24:13.168 --rc genhtml_legend=1 01:24:13.168 --rc geninfo_all_blocks=1 01:24:13.168 --rc geninfo_unexecuted_blocks=1 01:24:13.168 01:24:13.168 ' 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:24:13.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:13.168 --rc genhtml_branch_coverage=1 01:24:13.168 --rc genhtml_function_coverage=1 01:24:13.168 --rc genhtml_legend=1 01:24:13.168 --rc geninfo_all_blocks=1 01:24:13.168 --rc geninfo_unexecuted_blocks=1 01:24:13.168 01:24:13.168 ' 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:24:13.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:13.168 --rc genhtml_branch_coverage=1 01:24:13.168 --rc genhtml_function_coverage=1 01:24:13.168 --rc genhtml_legend=1 01:24:13.168 --rc geninfo_all_blocks=1 01:24:13.168 --rc geninfo_unexecuted_blocks=1 01:24:13.168 01:24:13.168 ' 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:24:13.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:13.168 --rc genhtml_branch_coverage=1 01:24:13.168 --rc genhtml_function_coverage=1 01:24:13.168 --rc genhtml_legend=1 01:24:13.168 --rc geninfo_all_blocks=1 01:24:13.168 --rc geninfo_unexecuted_blocks=1 01:24:13.168 01:24:13.168 ' 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:24:13.168 05:18:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:24:13.168 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:24:13.427 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:13.427 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:13.427 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:24:13.427 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:24:13.427 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:24:13.427 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:24:13.427 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:24:13.427 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 01:24:13.427 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:24:13.427 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:24:13.427 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:24:13.427 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:13.427 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:24:13.428 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
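nvmftestinit has just been entered at this point; with NET_TYPE=virt it falls through to nvmf_veth_init, and the ip commands traced below assemble a disposable veth/namespace topology for the TCP target. A condensed sketch of that topology, using only names and addresses that appear in the trace (the real helper in test/nvmf/common.sh also tears down stale devices, sets up the second interface pair, and adds firewall rules):

    ip netns add nvmf_tgt_ns_spdk                                  # target runs inside its own netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target listens here
    ip link add nvmf_br type bridge                                # bridge joins the two peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up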
01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:24:13.428 Cannot find device "nvmf_init_br" 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:24:13.428 Cannot find device "nvmf_init_br2" 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:24:13.428 Cannot find device "nvmf_tgt_br" 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:24:13.428 Cannot find device "nvmf_tgt_br2" 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:24:13.428 Cannot find device "nvmf_init_br" 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:24:13.428 Cannot find device "nvmf_init_br2" 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:24:13.428 Cannot find device "nvmf_tgt_br" 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:24:13.428 Cannot find device "nvmf_tgt_br2" 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:24:13.428 Cannot find device "nvmf_br" 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:24:13.428 Cannot find device "nvmf_init_if" 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:24:13.428 Cannot find device "nvmf_init_if2" 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:13.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:13.428 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:24:13.428 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:24:13.686 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:24:13.686 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:24:13.686 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:24:13.686 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:24:13.686 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:24:13.686 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:24:13.686 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:24:13.686 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:24:13.687 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:24:13.687 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:24:13.687 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:24:13.687 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:24:13.687 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:24:13.687 05:18:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:24:13.687 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:24:13.687 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 01:24:13.687 01:24:13.687 --- 10.0.0.3 ping statistics --- 01:24:13.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:13.687 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:24:13.687 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:24:13.687 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 01:24:13.687 01:24:13.687 --- 10.0.0.4 ping statistics --- 01:24:13.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:13.687 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:24:13.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:24:13.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 01:24:13.687 01:24:13.687 --- 10.0.0.1 ping statistics --- 01:24:13.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:13.687 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:24:13.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:24:13.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 01:24:13.687 01:24:13.687 --- 10.0.0.2 ping statistics --- 01:24:13.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:13.687 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73334 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73334 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73334 ']' 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:13.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:13.687 05:18:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:24:13.945 [2024-12-09 05:18:56.177697] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:24:13.945 [2024-12-09 05:18:56.177765] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:24:13.945 [2024-12-09 05:18:56.328763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:14.203 [2024-12-09 05:18:56.400082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:24:14.203 [2024-12-09 05:18:56.400131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:24:14.203 [2024-12-09 05:18:56.400138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:24:14.203 [2024-12-09 05:18:56.400143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:24:14.203 [2024-12-09 05:18:56.400147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:24:14.203 [2024-12-09 05:18:56.400572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 01:24:14.770 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:14.770 05:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:24:14.770 [2024-12-09 05:18:57.176342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:24:15.029 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:15.029 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 01:24:15.029 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:15.029 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:24:15.029 Malloc0 01:24:15.029 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:15.029 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 01:24:15.029 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:15.029 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:24:15.030 [2024-12-09 05:18:57.262628] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:24:15.030 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:15.030 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 01:24:15.030 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:15.030 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:24:15.030 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:15.030 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 01:24:15.030 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:15.030 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:24:15.030 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:15.030 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:24:15.030 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:15.030 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:24:15.030 [2024-12-09 05:18:57.298659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:24:15.030 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:15.030 05:18:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:24:15.289 [2024-12-09 05:18:57.491453] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 01:24:16.669 Initializing NVMe Controllers 01:24:16.669 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 01:24:16.669 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 01:24:16.669 Initialization complete. Launching workers. 01:24:16.670 ======================================================== 01:24:16.670 Latency(us) 01:24:16.670 Device Information : IOPS MiB/s Average min max 01:24:16.670 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.03 62.38 8015.48 7934.38 8144.62 01:24:16.670 ======================================================== 01:24:16.670 Total : 499.03 62.38 8015.48 7934.38 8144.62 01:24:16.670 01:24:16.670 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 01:24:16.670 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 01:24:16.670 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:16.670 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:24:16.670 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:16.670 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 01:24:16.670 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 01:24:16.670 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:24:16.670 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 01:24:16.670 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 01:24:16.670 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 01:24:16.670 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:24:16.670 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 01:24:16.670 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 01:24:16.670 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:24:16.670 rmmod nvme_tcp 01:24:16.670 rmmod nvme_fabrics 01:24:16.670 rmmod nvme_keyring 01:24:16.670 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:24:16.670 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 01:24:16.670 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 01:24:16.670 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73334 ']' 01:24:16.670 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73334 01:24:16.670 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73334 ']' 01:24:16.670 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # 
kill -0 73334 01:24:16.670 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 01:24:16.670 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:16.670 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73334 01:24:16.670 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:24:16.670 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:24:16.670 killing process with pid 73334 01:24:16.670 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73334' 01:24:16.670 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73334 01:24:16.670 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73334 01:24:16.929 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:24:16.929 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:24:16.929 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:24:16.929 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 01:24:16.929 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:24:16.929 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 01:24:16.929 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 01:24:16.929 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:24:16.929 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:24:16.929 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 01:24:17.190 01:24:17.190 real 0m4.251s 01:24:17.190 user 0m3.553s 01:24:17.190 sys 0m0.972s 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:17.190 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:24:17.190 ************************************ 01:24:17.190 END TEST nvmf_wait_for_buf 01:24:17.190 ************************************ 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:24:17.450 ************************************ 01:24:17.450 START TEST nvmf_nsid 01:24:17.450 ************************************ 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 01:24:17.450 * Looking for test storage... 
01:24:17.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:24:17.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:17.450 --rc genhtml_branch_coverage=1 01:24:17.450 --rc genhtml_function_coverage=1 01:24:17.450 --rc genhtml_legend=1 01:24:17.450 --rc geninfo_all_blocks=1 01:24:17.450 --rc geninfo_unexecuted_blocks=1 01:24:17.450 01:24:17.450 ' 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:24:17.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:17.450 --rc genhtml_branch_coverage=1 01:24:17.450 --rc genhtml_function_coverage=1 01:24:17.450 --rc genhtml_legend=1 01:24:17.450 --rc geninfo_all_blocks=1 01:24:17.450 --rc geninfo_unexecuted_blocks=1 01:24:17.450 01:24:17.450 ' 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:24:17.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:17.450 --rc genhtml_branch_coverage=1 01:24:17.450 --rc genhtml_function_coverage=1 01:24:17.450 --rc genhtml_legend=1 01:24:17.450 --rc geninfo_all_blocks=1 01:24:17.450 --rc geninfo_unexecuted_blocks=1 01:24:17.450 01:24:17.450 ' 01:24:17.450 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:24:17.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:17.450 --rc genhtml_branch_coverage=1 01:24:17.450 --rc genhtml_function_coverage=1 01:24:17.450 --rc genhtml_legend=1 01:24:17.450 --rc geninfo_all_blocks=1 01:24:17.450 --rc geninfo_unexecuted_blocks=1 01:24:17.450 01:24:17.450 ' 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
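The cmp_versions trace above is what selects the lcov flags for this run: both version strings are split on '.', '-' and ':' into arrays and compared field by field, so lt 1.15 2 succeeds and the legacy branch/function coverage options are exported. A minimal bash illustration of that comparison style (a sketch only, not the scripts/common.sh implementation):

    ver_lt() {                              # succeed if version $1 sorts before $2
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                            # equal is not "less than"
    }

    ver_lt 1.15 2 && echo "use legacy lcov options"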
01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:24:17.711 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:24:17.711 Cannot find device "nvmf_init_br" 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:24:17.711 Cannot find device "nvmf_init_br2" 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:24:17.711 Cannot find device "nvmf_tgt_br" 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 01:24:17.711 05:18:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:24:17.711 Cannot find device "nvmf_tgt_br2" 01:24:17.711 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 01:24:17.711 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:24:17.711 Cannot find device "nvmf_init_br" 01:24:17.711 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 01:24:17.711 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:24:17.711 Cannot find device "nvmf_init_br2" 01:24:17.711 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:24:17.712 Cannot find device "nvmf_tgt_br" 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:24:17.712 Cannot find device "nvmf_tgt_br2" 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:24:17.712 Cannot find device "nvmf_br" 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:24:17.712 Cannot find device "nvmf_init_if" 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:24:17.712 Cannot find device "nvmf_init_if2" 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:17.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
01:24:17.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:24:17.712 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
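The nvmf_veth_init steps above build the test topology: veth pairs whose *_if ends carry the 10.0.0.x/24 addresses (the target-side ends moved into the nvmf_tgt_ns_spdk namespace) and whose *_br ends are enslaved to the nvmf_br bridge. Below is a minimal standalone sketch of the same idea, reduced to one initiator/target pair per side for brevity; it mirrors the names and addresses in the log but is not the SPDK helper itself, and it assumes root privileges.

  ns=nvmf_tgt_ns_spdk
  ip netns add "$ns"

  # One veth pair per side: the *_if end carries the address, the *_br end joins the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br

  # The target-side end lives inside the namespace; the initiator end stays in the root namespace.
  ip link set nvmf_tgt_if netns "$ns"

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec "$ns" ip link set nvmf_tgt_if up
  ip netns exec "$ns" ip link set lo up

  # Bridge the *_br ends so initiator and target can reach each other.
  # (The log additionally opens TCP/4420 and allows bridged forwarding via iptables.)
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Same sanity check as the pings in the log.
  ping -c 1 10.0.0.3
  ip netns exec "$ns" ping -c 1 10.0.0.1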
01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:24:17.972 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:24:17.972 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 01:24:17.972 01:24:17.972 --- 10.0.0.3 ping statistics --- 01:24:17.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:17.972 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:24:17.972 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:24:17.972 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 01:24:17.972 01:24:17.972 --- 10.0.0.4 ping statistics --- 01:24:17.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:17.972 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:24:17.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:24:17.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 01:24:17.972 01:24:17.972 --- 10.0.0.1 ping statistics --- 01:24:17.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:17.972 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:24:17.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:24:17.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 01:24:17.972 01:24:17.972 --- 10.0.0.2 ping statistics --- 01:24:17.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:17.972 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73610 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73610 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73610 ']' 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:17.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:17.972 05:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:24:17.972 [2024-12-09 05:19:00.392207] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
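nvmfappstart above boils down to launching nvmf_tgt inside the test namespace and waiting until its RPC socket answers. A hedged sketch of that launch-and-wait step follows, using the repo path and flags shown in the log and the stock spdk_get_version RPC as a liveness probe; the real waitforlisten in autotest_common.sh does more bookkeeping (shm id, retries, error paths) than shown here.

  SPDK=/home/vagrant/spdk_repo/spdk
  # Run the target inside the namespace so it listens on the 10.0.0.3/10.0.0.4 side.
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 1 &
  nvmfpid=$!

  # Poll the RPC socket; spdk_get_version succeeding means the app is up and listening.
  for _ in $(seq 1 100); do
      if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
          echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
          break
      fi
      sleep 0.1
  done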
01:24:17.972 [2024-12-09 05:19:00.392729] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:24:18.232 [2024-12-09 05:19:00.548351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:18.232 [2024-12-09 05:19:00.629881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:24:18.232 [2024-12-09 05:19:00.630043] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:24:18.232 [2024-12-09 05:19:00.630091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:24:18.232 [2024-12-09 05:19:00.630118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:24:18.232 [2024-12-09 05:19:00.630134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:24:18.232 [2024-12-09 05:19:00.630558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:18.491 [2024-12-09 05:19:00.710007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73642 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=de2eeb99-6ab0-4c33-93ff-ab8e661d1f58 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=b2987518-0235-4a12-8a96-87798e374513 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=9b32aeec-0dfa-43e6-8947-782c65fba2d1 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:24:19.059 null0 01:24:19.059 [2024-12-09 05:19:01.345269] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:24:19.059 null1 01:24:19.059 [2024-12-09 05:19:01.345387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73642 ] 01:24:19.059 null2 01:24:19.059 [2024-12-09 05:19:01.357040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:24:19.059 [2024-12-09 05:19:01.381151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:19.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73642 /var/tmp/tgt2.sock 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73642 ']' 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 01:24:19.059 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:19.060 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
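The three uuidgen values recorded above are handed to the second target as namespace UUIDs, and the test later verifies (via nvme id-ns and jq, visible further down in the log) that each namespace's NGUID is the same 128-bit value with the dashes stripped. A small sketch of that check; the device node and UUID below are taken from the log purely as illustration.

  uuid=de2eeb99-6ab0-4c33-93ff-ab8e661d1f58          # ns1uuid from the uuidgen call above
  expected=$(tr -d '-' <<< "$uuid" | tr '[:lower:]' '[:upper:]')
  # nvme id-ns reports the namespace's NGUID; jq pulls it out of the JSON output.
  actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
  if [[ "$actual" == "$expected" ]]; then
      echo "nguid matches uuid"
  else
      echo "nguid mismatch: $actual != $expected" >&2
  fi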
01:24:19.060 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:19.060 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:24:19.060 [2024-12-09 05:19:01.497861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:24:19.318 [2024-12-09 05:19:01.549667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:24:19.318 [2024-12-09 05:19:01.605695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:24:19.577 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:19.577 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 01:24:19.577 05:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 01:24:19.837 [2024-12-09 05:19:02.101657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:24:19.837 [2024-12-09 05:19:02.117680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 01:24:19.837 nvme0n1 nvme0n2 01:24:19.837 nvme1n1 01:24:19.837 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 01:24:19.837 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 01:24:19.837 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:20.099 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 01:24:20.099 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 01:24:20.099 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 01:24:20.099 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 01:24:20.099 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 01:24:20.099 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 01:24:20.099 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 01:24:20.099 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 01:24:20.099 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:24:20.099 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 01:24:20.099 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 01:24:20.099 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 01:24:20.099 05:19:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 01:24:21.040 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:24:21.040 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 01:24:21.040 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:24:21.040 05:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 01:24:21.040 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 01:24:21.040 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid de2eeb99-6ab0-4c33-93ff-ab8e661d1f58 01:24:21.040 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 01:24:21.040 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 01:24:21.040 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 01:24:21.040 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 01:24:21.040 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 01:24:21.041 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=de2eeb996ab04c3393ffab8e661d1f58 01:24:21.041 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DE2EEB996AB04C3393FFAB8E661D1F58 01:24:21.041 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ DE2EEB996AB04C3393FFAB8E661D1F58 == \D\E\2\E\E\B\9\9\6\A\B\0\4\C\3\3\9\3\F\F\A\B\8\E\6\6\1\D\1\F\5\8 ]] 01:24:21.041 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 01:24:21.041 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 01:24:21.041 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:24:21.041 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 01:24:21.041 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:24:21.041 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 01:24:21.041 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 01:24:21.041 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid b2987518-0235-4a12-8a96-87798e374513 01:24:21.041 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 01:24:21.041 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 01:24:21.041 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 01:24:21.041 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 01:24:21.041 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b298751802354a128a9687798e374513 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B298751802354A128A9687798E374513 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ B298751802354A128A9687798E374513 == \B\2\9\8\7\5\1\8\0\2\3\5\4\A\1\2\8\A\9\6\8\7\7\9\8\E\3\7\4\5\1\3 ]] 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 01:24:21.300 05:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 9b32aeec-0dfa-43e6-8947-782c65fba2d1 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9b32aeec0dfa43e68947782c65fba2d1 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9B32AEEC0DFA43E68947782C65FBA2D1 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 9B32AEEC0DFA43E68947782C65FBA2D1 == \9\B\3\2\A\E\E\C\0\D\F\A\4\3\E\6\8\9\4\7\7\8\2\C\6\5\F\B\A\2\D\1 ]] 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73642 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73642 ']' 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73642 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:21.300 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73642 01:24:21.560 killing process with pid 73642 01:24:21.560 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:24:21.560 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:24:21.560 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73642' 01:24:21.560 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73642 01:24:21.560 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73642 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' 
tcp == tcp ']' 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:24:21.818 rmmod nvme_tcp 01:24:21.818 rmmod nvme_fabrics 01:24:21.818 rmmod nvme_keyring 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73610 ']' 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73610 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73610 ']' 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73610 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:21.818 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73610 01:24:22.077 killing process with pid 73610 01:24:22.077 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:24:22.077 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:24:22.077 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73610' 01:24:22.077 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73610 01:24:22.077 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73610 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:24:22.336 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:24:22.596 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:24:22.596 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:22.596 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:22.596 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 01:24:22.596 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:22.596 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:22.596 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:22.596 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 01:24:22.596 ************************************ 01:24:22.596 END TEST nvmf_nsid 01:24:22.596 ************************************ 01:24:22.596 01:24:22.596 real 0m5.211s 01:24:22.596 user 0m7.054s 01:24:22.596 sys 0m1.917s 01:24:22.596 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:22.596 05:19:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:24:22.596 05:19:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 01:24:22.596 ************************************ 01:24:22.596 END TEST nvmf_target_extra 01:24:22.596 ************************************ 01:24:22.596 01:24:22.596 real 4m37.894s 01:24:22.596 user 9m15.765s 01:24:22.596 sys 1m6.382s 01:24:22.596 05:19:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:22.596 05:19:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:24:22.596 05:19:05 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 01:24:22.596 05:19:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:24:22.596 05:19:05 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:22.596 05:19:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:24:22.596 ************************************ 01:24:22.596 START TEST nvmf_host 01:24:22.596 ************************************ 01:24:22.596 05:19:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 01:24:22.854 * Looking for test storage... 
01:24:22.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:24:22.854 05:19:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:24:22.854 05:19:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 01:24:22.854 05:19:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:24:22.854 05:19:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:24:22.854 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:24:22.854 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 01:24:22.854 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 01:24:22.854 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 01:24:22.854 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 01:24:22.854 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:24:22.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:22.855 --rc genhtml_branch_coverage=1 01:24:22.855 --rc genhtml_function_coverage=1 01:24:22.855 --rc genhtml_legend=1 01:24:22.855 --rc geninfo_all_blocks=1 01:24:22.855 --rc geninfo_unexecuted_blocks=1 01:24:22.855 01:24:22.855 ' 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:24:22.855 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 01:24:22.855 --rc genhtml_branch_coverage=1 01:24:22.855 --rc genhtml_function_coverage=1 01:24:22.855 --rc genhtml_legend=1 01:24:22.855 --rc geninfo_all_blocks=1 01:24:22.855 --rc geninfo_unexecuted_blocks=1 01:24:22.855 01:24:22.855 ' 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:24:22.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:22.855 --rc genhtml_branch_coverage=1 01:24:22.855 --rc genhtml_function_coverage=1 01:24:22.855 --rc genhtml_legend=1 01:24:22.855 --rc geninfo_all_blocks=1 01:24:22.855 --rc geninfo_unexecuted_blocks=1 01:24:22.855 01:24:22.855 ' 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:24:22.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:22.855 --rc genhtml_branch_coverage=1 01:24:22.855 --rc genhtml_function_coverage=1 01:24:22.855 --rc genhtml_legend=1 01:24:22.855 --rc geninfo_all_blocks=1 01:24:22.855 --rc geninfo_unexecuted_blocks=1 01:24:22.855 01:24:22.855 ' 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:24:22.855 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 01:24:22.855 
05:19:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:24:22.855 ************************************ 01:24:22.855 START TEST nvmf_identify 01:24:22.855 ************************************ 01:24:22.855 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 01:24:23.114 * Looking for test storage... 01:24:23.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:24:23.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:23.114 --rc genhtml_branch_coverage=1 01:24:23.114 --rc genhtml_function_coverage=1 01:24:23.114 --rc genhtml_legend=1 01:24:23.114 --rc geninfo_all_blocks=1 01:24:23.114 --rc geninfo_unexecuted_blocks=1 01:24:23.114 01:24:23.114 ' 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:24:23.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:23.114 --rc genhtml_branch_coverage=1 01:24:23.114 --rc genhtml_function_coverage=1 01:24:23.114 --rc genhtml_legend=1 01:24:23.114 --rc geninfo_all_blocks=1 01:24:23.114 --rc geninfo_unexecuted_blocks=1 01:24:23.114 01:24:23.114 ' 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:24:23.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:23.114 --rc genhtml_branch_coverage=1 01:24:23.114 --rc genhtml_function_coverage=1 01:24:23.114 --rc genhtml_legend=1 01:24:23.114 --rc geninfo_all_blocks=1 01:24:23.114 --rc geninfo_unexecuted_blocks=1 01:24:23.114 01:24:23.114 ' 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:24:23.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:23.114 --rc genhtml_branch_coverage=1 01:24:23.114 --rc genhtml_function_coverage=1 01:24:23.114 --rc genhtml_legend=1 01:24:23.114 --rc geninfo_all_blocks=1 01:24:23.114 --rc geninfo_unexecuted_blocks=1 01:24:23.114 01:24:23.114 ' 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:23.114 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:23.114 
05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:24:23.115 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:23.115 05:19:05 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:24:23.115 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:24:23.416 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:24:23.416 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 01:24:23.416 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:24:23.416 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:24:23.416 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:24:23.416 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:24:23.416 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:24:23.417 Cannot find device "nvmf_init_br" 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:24:23.417 Cannot find device "nvmf_init_br2" 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:24:23.417 Cannot find device "nvmf_tgt_br" 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
01:24:23.417 Cannot find device "nvmf_tgt_br2" 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:24:23.417 Cannot find device "nvmf_init_br" 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:24:23.417 Cannot find device "nvmf_init_br2" 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:24:23.417 Cannot find device "nvmf_tgt_br" 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:24:23.417 Cannot find device "nvmf_tgt_br2" 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:24:23.417 Cannot find device "nvmf_br" 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:24:23.417 Cannot find device "nvmf_init_if" 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:24:23.417 Cannot find device "nvmf_init_if2" 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:23.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:23.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:24:23.417 
05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:24:23.417 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:24:23.676 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:24:23.676 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.126 ms 01:24:23.676 01:24:23.676 --- 10.0.0.3 ping statistics --- 01:24:23.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:23.676 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:24:23.676 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:24:23.676 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 01:24:23.676 01:24:23.676 --- 10.0.0.4 ping statistics --- 01:24:23.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:23.676 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:24:23.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:24:23.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 01:24:23.676 01:24:23.676 --- 10.0.0.1 ping statistics --- 01:24:23.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:23.676 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:24:23.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:24:23.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 01:24:23.676 01:24:23.676 --- 10.0.0.2 ping statistics --- 01:24:23.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:23.676 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:24:23.676 05:19:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:24:23.676 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 01:24:23.676 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 01:24:23.676 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:24:23.676 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73996 01:24:23.676 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:24:23.676 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:24:23.676 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73996 01:24:23.676 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 73996 ']' 01:24:23.676 
05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:23.676 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:23.676 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:23.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:23.676 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:23.676 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:24:23.676 [2024-12-09 05:19:06.070349] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:24:23.676 [2024-12-09 05:19:06.070477] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:24:23.934 [2024-12-09 05:19:06.225222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:24:23.934 [2024-12-09 05:19:06.279816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:24:23.934 [2024-12-09 05:19:06.279953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:24:23.934 [2024-12-09 05:19:06.279992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:24:23.934 [2024-12-09 05:19:06.280000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:24:23.934 [2024-12-09 05:19:06.280005] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:24:23.934 [2024-12-09 05:19:06.280936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:24:23.934 [2024-12-09 05:19:06.281129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:24:23.934 [2024-12-09 05:19:06.281254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:24:23.934 [2024-12-09 05:19:06.281256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:23.934 [2024-12-09 05:19:06.323353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:24:24.499 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:24.499 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 01:24:24.499 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:24:24.499 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:24.499 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:24:24.499 [2024-12-09 05:19:06.939033] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:24:24.499 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:24.499 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 01:24:24.499 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 01:24:24.499 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:24:24.756 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:24:24.756 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:24.756 05:19:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:24:24.756 Malloc0 01:24:24.756 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:24.756 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:24:24.756 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:24:24.757 [2024-12-09 05:19:07.057837] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:24:24.757 [ 01:24:24.757 { 01:24:24.757 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:24:24.757 "subtype": "Discovery", 01:24:24.757 "listen_addresses": [ 01:24:24.757 { 01:24:24.757 "trtype": "TCP", 01:24:24.757 "adrfam": "IPv4", 01:24:24.757 "traddr": "10.0.0.3", 01:24:24.757 "trsvcid": "4420" 01:24:24.757 } 01:24:24.757 ], 01:24:24.757 "allow_any_host": true, 01:24:24.757 "hosts": [] 01:24:24.757 }, 01:24:24.757 { 01:24:24.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:24:24.757 "subtype": "NVMe", 01:24:24.757 "listen_addresses": [ 01:24:24.757 { 01:24:24.757 "trtype": "TCP", 01:24:24.757 "adrfam": "IPv4", 01:24:24.757 "traddr": "10.0.0.3", 01:24:24.757 "trsvcid": "4420" 01:24:24.757 } 01:24:24.757 ], 01:24:24.757 "allow_any_host": true, 01:24:24.757 "hosts": [], 01:24:24.757 "serial_number": "SPDK00000000000001", 01:24:24.757 "model_number": "SPDK bdev Controller", 01:24:24.757 "max_namespaces": 32, 01:24:24.757 "min_cntlid": 1, 01:24:24.757 "max_cntlid": 65519, 01:24:24.757 "namespaces": [ 01:24:24.757 { 01:24:24.757 "nsid": 1, 01:24:24.757 "bdev_name": "Malloc0", 01:24:24.757 "name": "Malloc0", 01:24:24.757 "nguid": "ABCDEF0123456789ABCDEF0123456789", 01:24:24.757 "eui64": "ABCDEF0123456789", 01:24:24.757 "uuid": "cb6a25d6-9a5c-4637-bb85-e5c0fb307978" 01:24:24.757 } 01:24:24.757 ] 01:24:24.757 } 01:24:24.757 ] 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:24.757 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 01:24:24.757 [2024-12-09 05:19:07.127397] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:24:24.757 [2024-12-09 05:19:07.127445] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74031 ] 01:24:25.020 [2024-12-09 05:19:07.276404] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 01:24:25.020 [2024-12-09 05:19:07.276457] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 01:24:25.020 [2024-12-09 05:19:07.276461] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 01:24:25.020 [2024-12-09 05:19:07.276473] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 01:24:25.020 [2024-12-09 05:19:07.276480] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 01:24:25.020 [2024-12-09 05:19:07.276696] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 01:24:25.020 [2024-12-09 05:19:07.276733] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xadf750 0 01:24:25.020 [2024-12-09 05:19:07.282383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 01:24:25.020 [2024-12-09 05:19:07.282396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 01:24:25.020 [2024-12-09 05:19:07.282399] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 01:24:25.020 [2024-12-09 05:19:07.282401] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 01:24:25.020 [2024-12-09 05:19:07.282428] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.282432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.282435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadf750) 01:24:25.020 [2024-12-09 05:19:07.282445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:24:25.020 [2024-12-09 05:19:07.282466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43740, cid 0, qid 0 01:24:25.020 [2024-12-09 05:19:07.290344] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.020 [2024-12-09 05:19:07.290360] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.020 [2024-12-09 05:19:07.290363] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.290366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43740) on tqpair=0xadf750 01:24:25.020 [2024-12-09 05:19:07.290373] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 01:24:25.020 [2024-12-09 05:19:07.290379] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 01:24:25.020 [2024-12-09 05:19:07.290399] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 01:24:25.020 [2024-12-09 05:19:07.290413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.290417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
01:24:25.020 [2024-12-09 05:19:07.290420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadf750) 01:24:25.020 [2024-12-09 05:19:07.290427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.020 [2024-12-09 05:19:07.290446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43740, cid 0, qid 0 01:24:25.020 [2024-12-09 05:19:07.290493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.020 [2024-12-09 05:19:07.290498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.020 [2024-12-09 05:19:07.290500] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.290504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43740) on tqpair=0xadf750 01:24:25.020 [2024-12-09 05:19:07.290508] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 01:24:25.020 [2024-12-09 05:19:07.290524] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 01:24:25.020 [2024-12-09 05:19:07.290529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.290531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.290533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadf750) 01:24:25.020 [2024-12-09 05:19:07.290538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.020 [2024-12-09 05:19:07.290549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43740, cid 0, qid 0 01:24:25.020 [2024-12-09 05:19:07.290588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.020 [2024-12-09 05:19:07.290592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.020 [2024-12-09 05:19:07.290594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.290596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43740) on tqpair=0xadf750 01:24:25.020 [2024-12-09 05:19:07.290601] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 01:24:25.020 [2024-12-09 05:19:07.290607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 01:24:25.020 [2024-12-09 05:19:07.290611] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.290614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.290616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadf750) 01:24:25.020 [2024-12-09 05:19:07.290621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.020 [2024-12-09 05:19:07.290630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43740, cid 0, qid 0 01:24:25.020 [2024-12-09 05:19:07.290685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.020 [2024-12-09 05:19:07.290690] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.020 [2024-12-09 05:19:07.290693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.290696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43740) on tqpair=0xadf750 01:24:25.020 [2024-12-09 05:19:07.290700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 01:24:25.020 [2024-12-09 05:19:07.290707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.290710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.290713] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadf750) 01:24:25.020 [2024-12-09 05:19:07.290718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.020 [2024-12-09 05:19:07.290729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43740, cid 0, qid 0 01:24:25.020 [2024-12-09 05:19:07.290807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.020 [2024-12-09 05:19:07.290814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.020 [2024-12-09 05:19:07.290816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.290819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43740) on tqpair=0xadf750 01:24:25.020 [2024-12-09 05:19:07.290824] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 01:24:25.020 [2024-12-09 05:19:07.290828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 01:24:25.020 [2024-12-09 05:19:07.290834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 01:24:25.020 [2024-12-09 05:19:07.290938] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 01:24:25.020 [2024-12-09 05:19:07.290954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 01:24:25.020 [2024-12-09 05:19:07.290962] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.290965] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.290968] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadf750) 01:24:25.020 [2024-12-09 05:19:07.290973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.020 [2024-12-09 05:19:07.290986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43740, cid 0, qid 0 01:24:25.020 [2024-12-09 05:19:07.291024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.020 [2024-12-09 05:19:07.291030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.020 [2024-12-09 05:19:07.291032] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
01:24:25.020 [2024-12-09 05:19:07.291035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43740) on tqpair=0xadf750 01:24:25.020 [2024-12-09 05:19:07.291039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 01:24:25.020 [2024-12-09 05:19:07.291046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.291050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.291052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadf750) 01:24:25.020 [2024-12-09 05:19:07.291058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.020 [2024-12-09 05:19:07.291075] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43740, cid 0, qid 0 01:24:25.020 [2024-12-09 05:19:07.291112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.020 [2024-12-09 05:19:07.291125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.020 [2024-12-09 05:19:07.291128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.020 [2024-12-09 05:19:07.291132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43740) on tqpair=0xadf750 01:24:25.021 [2024-12-09 05:19:07.291135] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 01:24:25.021 [2024-12-09 05:19:07.291139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 01:24:25.021 [2024-12-09 05:19:07.291144] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 01:24:25.021 [2024-12-09 05:19:07.291152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 01:24:25.021 [2024-12-09 05:19:07.291159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadf750) 01:24:25.021 [2024-12-09 05:19:07.291168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.021 [2024-12-09 05:19:07.291185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43740, cid 0, qid 0 01:24:25.021 [2024-12-09 05:19:07.291260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:24:25.021 [2024-12-09 05:19:07.291265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:24:25.021 [2024-12-09 05:19:07.291268] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291271] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xadf750): datao=0, datal=4096, cccid=0 01:24:25.021 [2024-12-09 05:19:07.291274] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb43740) on tqpair(0xadf750): expected_datao=0, payload_size=4096 01:24:25.021 [2024-12-09 05:19:07.291278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
01:24:25.021 [2024-12-09 05:19:07.291284] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291288] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.021 [2024-12-09 05:19:07.291299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.021 [2024-12-09 05:19:07.291302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43740) on tqpair=0xadf750 01:24:25.021 [2024-12-09 05:19:07.291311] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 01:24:25.021 [2024-12-09 05:19:07.291314] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 01:24:25.021 [2024-12-09 05:19:07.291317] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 01:24:25.021 [2024-12-09 05:19:07.291343] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 01:24:25.021 [2024-12-09 05:19:07.291348] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 01:24:25.021 [2024-12-09 05:19:07.291352] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 01:24:25.021 [2024-12-09 05:19:07.291365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 01:24:25.021 [2024-12-09 05:19:07.291370] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadf750) 01:24:25.021 [2024-12-09 05:19:07.291382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 01:24:25.021 [2024-12-09 05:19:07.291401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43740, cid 0, qid 0 01:24:25.021 [2024-12-09 05:19:07.291447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.021 [2024-12-09 05:19:07.291452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.021 [2024-12-09 05:19:07.291455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43740) on tqpair=0xadf750 01:24:25.021 [2024-12-09 05:19:07.291463] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xadf750) 01:24:25.021 [2024-12-09 05:19:07.291481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:24:25.021 [2024-12-09 05:19:07.291486] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291492] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xadf750) 01:24:25.021 [2024-12-09 05:19:07.291496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:24:25.021 [2024-12-09 05:19:07.291501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xadf750) 01:24:25.021 [2024-12-09 05:19:07.291511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:24:25.021 [2024-12-09 05:19:07.291523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.021 [2024-12-09 05:19:07.291533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:24:25.021 [2024-12-09 05:19:07.291537] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 01:24:25.021 [2024-12-09 05:19:07.291550] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 01:24:25.021 [2024-12-09 05:19:07.291555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xadf750) 01:24:25.021 [2024-12-09 05:19:07.291563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.021 [2024-12-09 05:19:07.291591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43740, cid 0, qid 0 01:24:25.021 [2024-12-09 05:19:07.291596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb438c0, cid 1, qid 0 01:24:25.021 [2024-12-09 05:19:07.291601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43a40, cid 2, qid 0 01:24:25.021 [2024-12-09 05:19:07.291604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.021 [2024-12-09 05:19:07.291608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43d40, cid 4, qid 0 01:24:25.021 [2024-12-09 05:19:07.291689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.021 [2024-12-09 05:19:07.291704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.021 [2024-12-09 05:19:07.291708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43d40) on tqpair=0xadf750 01:24:25.021 [2024-12-09 05:19:07.291716] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 01:24:25.021 [2024-12-09 05:19:07.291719] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 01:24:25.021 [2024-12-09 05:19:07.291728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xadf750) 01:24:25.021 [2024-12-09 05:19:07.291743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.021 [2024-12-09 05:19:07.291756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43d40, cid 4, qid 0 01:24:25.021 [2024-12-09 05:19:07.291816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:24:25.021 [2024-12-09 05:19:07.291822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:24:25.021 [2024-12-09 05:19:07.291824] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291827] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xadf750): datao=0, datal=4096, cccid=4 01:24:25.021 [2024-12-09 05:19:07.291838] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb43d40) on tqpair(0xadf750): expected_datao=0, payload_size=4096 01:24:25.021 [2024-12-09 05:19:07.291841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291847] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291850] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.021 [2024-12-09 05:19:07.291861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.021 [2024-12-09 05:19:07.291863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43d40) on tqpair=0xadf750 01:24:25.021 [2024-12-09 05:19:07.291881] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 01:24:25.021 [2024-12-09 05:19:07.291901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291905] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xadf750) 01:24:25.021 [2024-12-09 05:19:07.291910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.021 [2024-12-09 05:19:07.291924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.291930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xadf750) 01:24:25.021 [2024-12-09 05:19:07.291935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 01:24:25.021 [2024-12-09 05:19:07.291957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xb43d40, cid 4, qid 0 01:24:25.021 [2024-12-09 05:19:07.291962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43ec0, cid 5, qid 0 01:24:25.021 [2024-12-09 05:19:07.292055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:24:25.021 [2024-12-09 05:19:07.292077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:24:25.021 [2024-12-09 05:19:07.292081] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:24:25.021 [2024-12-09 05:19:07.292084] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xadf750): datao=0, datal=1024, cccid=4 01:24:25.022 [2024-12-09 05:19:07.292087] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb43d40) on tqpair(0xadf750): expected_datao=0, payload_size=1024 01:24:25.022 [2024-12-09 05:19:07.292090] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292095] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292098] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.022 [2024-12-09 05:19:07.292107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.022 [2024-12-09 05:19:07.292110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43ec0) on tqpair=0xadf750 01:24:25.022 [2024-12-09 05:19:07.292125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.022 [2024-12-09 05:19:07.292130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.022 [2024-12-09 05:19:07.292133] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43d40) on tqpair=0xadf750 01:24:25.022 [2024-12-09 05:19:07.292145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xadf750) 01:24:25.022 [2024-12-09 05:19:07.292153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.022 [2024-12-09 05:19:07.292168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43d40, cid 4, qid 0 01:24:25.022 [2024-12-09 05:19:07.292222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:24:25.022 [2024-12-09 05:19:07.292228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:24:25.022 [2024-12-09 05:19:07.292230] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292234] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xadf750): datao=0, datal=3072, cccid=4 01:24:25.022 [2024-12-09 05:19:07.292238] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb43d40) on tqpair(0xadf750): expected_datao=0, payload_size=3072 01:24:25.022 [2024-12-09 05:19:07.292241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292253] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292256] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.022 [2024-12-09 05:19:07.292277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.022 [2024-12-09 05:19:07.292280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43d40) on tqpair=0xadf750 01:24:25.022 [2024-12-09 05:19:07.292290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xadf750) 01:24:25.022 [2024-12-09 05:19:07.292298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.022 [2024-12-09 05:19:07.292313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43d40, cid 4, qid 0 01:24:25.022 [2024-12-09 05:19:07.292376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:24:25.022 [2024-12-09 05:19:07.292382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:24:25.022 [2024-12-09 05:19:07.292385] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292387] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xadf750): datao=0, datal=8, cccid=4 01:24:25.022 [2024-12-09 05:19:07.292391] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb43d40) on tqpair(0xadf750): expected_datao=0, payload_size=8 01:24:25.022 [2024-12-09 05:19:07.292394] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292399] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292402] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.022 [2024-12-09 05:19:07.292418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.022 [2024-12-09 05:19:07.292420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.022 [2024-12-09 05:19:07.292423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43d40) on tqpair=0xadf750 01:24:25.022 ===================================================== 01:24:25.022 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 01:24:25.022 ===================================================== 01:24:25.022 Controller Capabilities/Features 01:24:25.022 ================================ 01:24:25.022 Vendor ID: 0000 01:24:25.022 Subsystem Vendor ID: 0000 01:24:25.022 Serial Number: .................... 01:24:25.022 Model Number: ........................................ 
01:24:25.022 Firmware Version: 25.01 01:24:25.022 Recommended Arb Burst: 0 01:24:25.022 IEEE OUI Identifier: 00 00 00 01:24:25.022 Multi-path I/O 01:24:25.022 May have multiple subsystem ports: No 01:24:25.022 May have multiple controllers: No 01:24:25.022 Associated with SR-IOV VF: No 01:24:25.022 Max Data Transfer Size: 131072 01:24:25.022 Max Number of Namespaces: 0 01:24:25.022 Max Number of I/O Queues: 1024 01:24:25.022 NVMe Specification Version (VS): 1.3 01:24:25.022 NVMe Specification Version (Identify): 1.3 01:24:25.022 Maximum Queue Entries: 128 01:24:25.022 Contiguous Queues Required: Yes 01:24:25.022 Arbitration Mechanisms Supported 01:24:25.022 Weighted Round Robin: Not Supported 01:24:25.022 Vendor Specific: Not Supported 01:24:25.022 Reset Timeout: 15000 ms 01:24:25.022 Doorbell Stride: 4 bytes 01:24:25.022 NVM Subsystem Reset: Not Supported 01:24:25.022 Command Sets Supported 01:24:25.022 NVM Command Set: Supported 01:24:25.022 Boot Partition: Not Supported 01:24:25.022 Memory Page Size Minimum: 4096 bytes 01:24:25.022 Memory Page Size Maximum: 4096 bytes 01:24:25.022 Persistent Memory Region: Not Supported 01:24:25.022 Optional Asynchronous Events Supported 01:24:25.022 Namespace Attribute Notices: Not Supported 01:24:25.022 Firmware Activation Notices: Not Supported 01:24:25.022 ANA Change Notices: Not Supported 01:24:25.022 PLE Aggregate Log Change Notices: Not Supported 01:24:25.022 LBA Status Info Alert Notices: Not Supported 01:24:25.022 EGE Aggregate Log Change Notices: Not Supported 01:24:25.022 Normal NVM Subsystem Shutdown event: Not Supported 01:24:25.022 Zone Descriptor Change Notices: Not Supported 01:24:25.022 Discovery Log Change Notices: Supported 01:24:25.022 Controller Attributes 01:24:25.022 128-bit Host Identifier: Not Supported 01:24:25.022 Non-Operational Permissive Mode: Not Supported 01:24:25.022 NVM Sets: Not Supported 01:24:25.022 Read Recovery Levels: Not Supported 01:24:25.022 Endurance Groups: Not Supported 01:24:25.022 Predictable Latency Mode: Not Supported 01:24:25.022 Traffic Based Keep ALive: Not Supported 01:24:25.022 Namespace Granularity: Not Supported 01:24:25.022 SQ Associations: Not Supported 01:24:25.022 UUID List: Not Supported 01:24:25.022 Multi-Domain Subsystem: Not Supported 01:24:25.022 Fixed Capacity Management: Not Supported 01:24:25.022 Variable Capacity Management: Not Supported 01:24:25.022 Delete Endurance Group: Not Supported 01:24:25.022 Delete NVM Set: Not Supported 01:24:25.022 Extended LBA Formats Supported: Not Supported 01:24:25.022 Flexible Data Placement Supported: Not Supported 01:24:25.022 01:24:25.022 Controller Memory Buffer Support 01:24:25.022 ================================ 01:24:25.022 Supported: No 01:24:25.022 01:24:25.022 Persistent Memory Region Support 01:24:25.022 ================================ 01:24:25.022 Supported: No 01:24:25.022 01:24:25.022 Admin Command Set Attributes 01:24:25.022 ============================ 01:24:25.022 Security Send/Receive: Not Supported 01:24:25.022 Format NVM: Not Supported 01:24:25.022 Firmware Activate/Download: Not Supported 01:24:25.022 Namespace Management: Not Supported 01:24:25.022 Device Self-Test: Not Supported 01:24:25.022 Directives: Not Supported 01:24:25.022 NVMe-MI: Not Supported 01:24:25.022 Virtualization Management: Not Supported 01:24:25.022 Doorbell Buffer Config: Not Supported 01:24:25.022 Get LBA Status Capability: Not Supported 01:24:25.022 Command & Feature Lockdown Capability: Not Supported 01:24:25.022 Abort Command Limit: 1 01:24:25.022 Async 
Event Request Limit: 4 01:24:25.022 Number of Firmware Slots: N/A 01:24:25.022 Firmware Slot 1 Read-Only: N/A 01:24:25.022 Firmware Activation Without Reset: N/A 01:24:25.022 Multiple Update Detection Support: N/A 01:24:25.022 Firmware Update Granularity: No Information Provided 01:24:25.022 Per-Namespace SMART Log: No 01:24:25.022 Asymmetric Namespace Access Log Page: Not Supported 01:24:25.022 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 01:24:25.022 Command Effects Log Page: Not Supported 01:24:25.022 Get Log Page Extended Data: Supported 01:24:25.022 Telemetry Log Pages: Not Supported 01:24:25.022 Persistent Event Log Pages: Not Supported 01:24:25.022 Supported Log Pages Log Page: May Support 01:24:25.022 Commands Supported & Effects Log Page: Not Supported 01:24:25.022 Feature Identifiers & Effects Log Page:May Support 01:24:25.022 NVMe-MI Commands & Effects Log Page: May Support 01:24:25.022 Data Area 4 for Telemetry Log: Not Supported 01:24:25.022 Error Log Page Entries Supported: 128 01:24:25.022 Keep Alive: Not Supported 01:24:25.022 01:24:25.022 NVM Command Set Attributes 01:24:25.023 ========================== 01:24:25.023 Submission Queue Entry Size 01:24:25.023 Max: 1 01:24:25.023 Min: 1 01:24:25.023 Completion Queue Entry Size 01:24:25.023 Max: 1 01:24:25.023 Min: 1 01:24:25.023 Number of Namespaces: 0 01:24:25.023 Compare Command: Not Supported 01:24:25.023 Write Uncorrectable Command: Not Supported 01:24:25.023 Dataset Management Command: Not Supported 01:24:25.023 Write Zeroes Command: Not Supported 01:24:25.023 Set Features Save Field: Not Supported 01:24:25.023 Reservations: Not Supported 01:24:25.023 Timestamp: Not Supported 01:24:25.023 Copy: Not Supported 01:24:25.023 Volatile Write Cache: Not Present 01:24:25.023 Atomic Write Unit (Normal): 1 01:24:25.023 Atomic Write Unit (PFail): 1 01:24:25.023 Atomic Compare & Write Unit: 1 01:24:25.023 Fused Compare & Write: Supported 01:24:25.023 Scatter-Gather List 01:24:25.023 SGL Command Set: Supported 01:24:25.023 SGL Keyed: Supported 01:24:25.023 SGL Bit Bucket Descriptor: Not Supported 01:24:25.023 SGL Metadata Pointer: Not Supported 01:24:25.023 Oversized SGL: Not Supported 01:24:25.023 SGL Metadata Address: Not Supported 01:24:25.023 SGL Offset: Supported 01:24:25.023 Transport SGL Data Block: Not Supported 01:24:25.023 Replay Protected Memory Block: Not Supported 01:24:25.023 01:24:25.023 Firmware Slot Information 01:24:25.023 ========================= 01:24:25.023 Active slot: 0 01:24:25.023 01:24:25.023 01:24:25.023 Error Log 01:24:25.023 ========= 01:24:25.023 01:24:25.023 Active Namespaces 01:24:25.023 ================= 01:24:25.023 Discovery Log Page 01:24:25.023 ================== 01:24:25.023 Generation Counter: 2 01:24:25.023 Number of Records: 2 01:24:25.023 Record Format: 0 01:24:25.023 01:24:25.023 Discovery Log Entry 0 01:24:25.023 ---------------------- 01:24:25.023 Transport Type: 3 (TCP) 01:24:25.023 Address Family: 1 (IPv4) 01:24:25.023 Subsystem Type: 3 (Current Discovery Subsystem) 01:24:25.023 Entry Flags: 01:24:25.023 Duplicate Returned Information: 1 01:24:25.023 Explicit Persistent Connection Support for Discovery: 1 01:24:25.023 Transport Requirements: 01:24:25.023 Secure Channel: Not Required 01:24:25.023 Port ID: 0 (0x0000) 01:24:25.023 Controller ID: 65535 (0xffff) 01:24:25.023 Admin Max SQ Size: 128 01:24:25.023 Transport Service Identifier: 4420 01:24:25.023 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 01:24:25.023 Transport Address: 10.0.0.3 01:24:25.023 
Discovery Log Entry 1 01:24:25.023 ---------------------- 01:24:25.023 Transport Type: 3 (TCP) 01:24:25.023 Address Family: 1 (IPv4) 01:24:25.023 Subsystem Type: 2 (NVM Subsystem) 01:24:25.023 Entry Flags: 01:24:25.023 Duplicate Returned Information: 0 01:24:25.023 Explicit Persistent Connection Support for Discovery: 0 01:24:25.023 Transport Requirements: 01:24:25.023 Secure Channel: Not Required 01:24:25.023 Port ID: 0 (0x0000) 01:24:25.023 Controller ID: 65535 (0xffff) 01:24:25.023 Admin Max SQ Size: 128 01:24:25.023 Transport Service Identifier: 4420 01:24:25.023 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 01:24:25.023 Transport Address: 10.0.0.3 [2024-12-09 05:19:07.292504] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 01:24:25.023 [2024-12-09 05:19:07.292521] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43740) on tqpair=0xadf750 01:24:25.023 [2024-12-09 05:19:07.292527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:24:25.023 [2024-12-09 05:19:07.292531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb438c0) on tqpair=0xadf750 01:24:25.023 [2024-12-09 05:19:07.292540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:24:25.023 [2024-12-09 05:19:07.292545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43a40) on tqpair=0xadf750 01:24:25.023 [2024-12-09 05:19:07.292548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:24:25.023 [2024-12-09 05:19:07.292552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.023 [2024-12-09 05:19:07.292556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:24:25.023 [2024-12-09 05:19:07.292565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.292568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.292571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.023 [2024-12-09 05:19:07.292577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.023 [2024-12-09 05:19:07.292591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.023 [2024-12-09 05:19:07.292636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.023 [2024-12-09 05:19:07.292642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.023 [2024-12-09 05:19:07.292644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.292647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.023 [2024-12-09 05:19:07.292653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.292656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.292658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.023 [2024-12-09 05:19:07.292663] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.023 [2024-12-09 05:19:07.292677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.023 [2024-12-09 05:19:07.292733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.023 [2024-12-09 05:19:07.292738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.023 [2024-12-09 05:19:07.292741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.292744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.023 [2024-12-09 05:19:07.292756] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 01:24:25.023 [2024-12-09 05:19:07.292760] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 01:24:25.023 [2024-12-09 05:19:07.292767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.292770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.292773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.023 [2024-12-09 05:19:07.292779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.023 [2024-12-09 05:19:07.292798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.023 [2024-12-09 05:19:07.292830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.023 [2024-12-09 05:19:07.292835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.023 [2024-12-09 05:19:07.292837] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.292840] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.023 [2024-12-09 05:19:07.292855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.292858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.292861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.023 [2024-12-09 05:19:07.292866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.023 [2024-12-09 05:19:07.292883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.023 [2024-12-09 05:19:07.292931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.023 [2024-12-09 05:19:07.292937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.023 [2024-12-09 05:19:07.292939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.292942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.023 [2024-12-09 05:19:07.292950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.292953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.292955] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.023 [2024-12-09 05:19:07.292968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.023 [2024-12-09 05:19:07.292979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.023 [2024-12-09 05:19:07.293018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.023 [2024-12-09 05:19:07.293023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.023 [2024-12-09 05:19:07.293026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.293035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.023 [2024-12-09 05:19:07.293043] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.293046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.023 [2024-12-09 05:19:07.293049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.023 [2024-12-09 05:19:07.293054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.023 [2024-12-09 05:19:07.293072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.024 [2024-12-09 05:19:07.293113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.024 [2024-12-09 05:19:07.293119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.024 [2024-12-09 05:19:07.293121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.024 [2024-12-09 05:19:07.293138] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.024 [2024-12-09 05:19:07.293150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.024 [2024-12-09 05:19:07.293162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.024 [2024-12-09 05:19:07.293208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.024 [2024-12-09 05:19:07.293213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.024 [2024-12-09 05:19:07.293216] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.024 [2024-12-09 05:19:07.293233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.024 [2024-12-09 05:19:07.293245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.024 [2024-12-09 05:19:07.293263] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.024 [2024-12-09 05:19:07.293298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.024 [2024-12-09 05:19:07.293304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.024 [2024-12-09 05:19:07.293306] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293309] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.024 [2024-12-09 05:19:07.293316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.024 [2024-12-09 05:19:07.293349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.024 [2024-12-09 05:19:07.293362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.024 [2024-12-09 05:19:07.293405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.024 [2024-12-09 05:19:07.293410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.024 [2024-12-09 05:19:07.293413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.024 [2024-12-09 05:19:07.293423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.024 [2024-12-09 05:19:07.293443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.024 [2024-12-09 05:19:07.293455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.024 [2024-12-09 05:19:07.293507] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.024 [2024-12-09 05:19:07.293513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.024 [2024-12-09 05:19:07.293516] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.024 [2024-12-09 05:19:07.293526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293529] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.024 [2024-12-09 05:19:07.293537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.024 [2024-12-09 05:19:07.293555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.024 [2024-12-09 05:19:07.293589] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.024 [2024-12-09 05:19:07.293595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.024 [2024-12-09 05:19:07.293597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.024 [2024-12-09 05:19:07.293608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.024 [2024-12-09 05:19:07.293619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.024 [2024-12-09 05:19:07.293638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.024 [2024-12-09 05:19:07.293679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.024 [2024-12-09 05:19:07.293684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.024 [2024-12-09 05:19:07.293687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.024 [2024-12-09 05:19:07.293704] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.024 [2024-12-09 05:19:07.293715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.024 [2024-12-09 05:19:07.293733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.024 [2024-12-09 05:19:07.293776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.024 [2024-12-09 05:19:07.293781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.024 [2024-12-09 05:19:07.293784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.024 [2024-12-09 05:19:07.293802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.024 [2024-12-09 05:19:07.293813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.024 [2024-12-09 05:19:07.293831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.024 [2024-12-09 05:19:07.293869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.024 [2024-12-09 05:19:07.293875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.024 [2024-12-09 05:19:07.293877] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.024 [2024-12-09 05:19:07.293887] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.024 [2024-12-09 05:19:07.293898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.024 [2024-12-09 05:19:07.293916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.024 [2024-12-09 05:19:07.293957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.024 [2024-12-09 05:19:07.293962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.024 [2024-12-09 05:19:07.293965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293968] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.024 [2024-12-09 05:19:07.293975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.024 [2024-12-09 05:19:07.293981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.024 [2024-12-09 05:19:07.293987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.024 [2024-12-09 05:19:07.294004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.024 [2024-12-09 05:19:07.294048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.025 [2024-12-09 05:19:07.294054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.025 [2024-12-09 05:19:07.294056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.025 [2024-12-09 05:19:07.294074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294077] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.025 [2024-12-09 05:19:07.294085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.025 [2024-12-09 05:19:07.294104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.025 [2024-12-09 05:19:07.294141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.025 [2024-12-09 05:19:07.294147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.025 [2024-12-09 05:19:07.294149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.025 
[2024-12-09 05:19:07.294159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.025 [2024-12-09 05:19:07.294178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.025 [2024-12-09 05:19:07.294189] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.025 [2024-12-09 05:19:07.294228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.025 [2024-12-09 05:19:07.294234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.025 [2024-12-09 05:19:07.294236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294239] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.025 [2024-12-09 05:19:07.294252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.025 [2024-12-09 05:19:07.294263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.025 [2024-12-09 05:19:07.294274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.025 [2024-12-09 05:19:07.294318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.025 [2024-12-09 05:19:07.294333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.025 [2024-12-09 05:19:07.294336] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.025 [2024-12-09 05:19:07.294352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.025 [2024-12-09 05:19:07.294363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.025 [2024-12-09 05:19:07.294377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.025 [2024-12-09 05:19:07.294416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.025 [2024-12-09 05:19:07.294422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.025 [2024-12-09 05:19:07.294424] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294427] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.025 [2024-12-09 05:19:07.294435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.025 [2024-12-09 
05:19:07.294440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.025 [2024-12-09 05:19:07.294453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.025 [2024-12-09 05:19:07.294465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.025 [2024-12-09 05:19:07.294505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.025 [2024-12-09 05:19:07.294510] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.025 [2024-12-09 05:19:07.294513] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.025 [2024-12-09 05:19:07.294530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.025 [2024-12-09 05:19:07.294541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.025 [2024-12-09 05:19:07.294559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.025 [2024-12-09 05:19:07.294599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.025 [2024-12-09 05:19:07.294604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.025 [2024-12-09 05:19:07.294607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.025 [2024-12-09 05:19:07.294617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.025 [2024-12-09 05:19:07.294628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.025 [2024-12-09 05:19:07.294650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.025 [2024-12-09 05:19:07.294706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.025 [2024-12-09 05:19:07.294712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.025 [2024-12-09 05:19:07.294714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.025 [2024-12-09 05:19:07.294724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.025 [2024-12-09 05:19:07.294735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.025 [2024-12-09 05:19:07.294746] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.025 [2024-12-09 05:19:07.294782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.025 [2024-12-09 05:19:07.294787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.025 [2024-12-09 05:19:07.294790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.025 [2024-12-09 05:19:07.294806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294809] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.025 [2024-12-09 05:19:07.294817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.025 [2024-12-09 05:19:07.294828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.025 [2024-12-09 05:19:07.294866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.025 [2024-12-09 05:19:07.294871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.025 [2024-12-09 05:19:07.294879] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.025 [2024-12-09 05:19:07.294890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.025 [2024-12-09 05:19:07.294902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.025 [2024-12-09 05:19:07.294913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.025 [2024-12-09 05:19:07.294959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.025 [2024-12-09 05:19:07.294964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.025 [2024-12-09 05:19:07.294966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.025 [2024-12-09 05:19:07.294976] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.294982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.025 [2024-12-09 05:19:07.294988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.025 [2024-12-09 05:19:07.295007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.025 [2024-12-09 
05:19:07.295045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.025 [2024-12-09 05:19:07.295050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.025 [2024-12-09 05:19:07.295053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.025 [2024-12-09 05:19:07.295056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.026 [2024-12-09 05:19:07.295080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.026 [2024-12-09 05:19:07.295092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.026 [2024-12-09 05:19:07.295104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.026 [2024-12-09 05:19:07.295142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.026 [2024-12-09 05:19:07.295147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.026 [2024-12-09 05:19:07.295149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.026 [2024-12-09 05:19:07.295159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.026 [2024-12-09 05:19:07.295179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.026 [2024-12-09 05:19:07.295191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.026 [2024-12-09 05:19:07.295240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.026 [2024-12-09 05:19:07.295245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.026 [2024-12-09 05:19:07.295248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.026 [2024-12-09 05:19:07.295258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.026 [2024-12-09 05:19:07.295269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.026 [2024-12-09 05:19:07.295280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.026 [2024-12-09 05:19:07.295334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.026 [2024-12-09 05:19:07.295340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.026 [2024-12-09 
05:19:07.295342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.026 [2024-12-09 05:19:07.295352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.026 [2024-12-09 05:19:07.295364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.026 [2024-12-09 05:19:07.295376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.026 [2024-12-09 05:19:07.295422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.026 [2024-12-09 05:19:07.295427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.026 [2024-12-09 05:19:07.295430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.026 [2024-12-09 05:19:07.295440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.026 [2024-12-09 05:19:07.295451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.026 [2024-12-09 05:19:07.295462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.026 [2024-12-09 05:19:07.295505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.026 [2024-12-09 05:19:07.295511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.026 [2024-12-09 05:19:07.295514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.026 [2024-12-09 05:19:07.295524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.026 [2024-12-09 05:19:07.295535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.026 [2024-12-09 05:19:07.295546] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.026 [2024-12-09 05:19:07.295593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.026 [2024-12-09 05:19:07.295598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.026 [2024-12-09 05:19:07.295601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295603] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 
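Note on the Discovery Log entries printed earlier in this output: both records point a host at the same TCP listener, 10.0.0.3:4420, once as the discovery subsystem (nqn.2014-08.org.nvmexpress.discovery) and once as the NVM subsystem nqn.2016-06.io.spdk:cnode1. The minimal C sketch below shows one way such a listener could be reached with the public SPDK host API; the transport string mirrors the fields from the log, while the program name and trimmed error handling are illustrative assumptions, not part of this test run.

/* Minimal sketch, assuming a reachable SPDK NVMe-oF TCP target at the address
 * shown in the Discovery Log entries above; uses only public SPDK host APIs
 * (spdk/env.h, spdk/nvme.h). The app name "discovery_probe" is illustrative. */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&opts);
	opts.name = "discovery_probe";
	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Same transport, address family, address, and service ID as Discovery Log Entry 0. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
	    "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
		fprintf(stderr, "bad transport ID string\n");
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);	/* synchronous attach */
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s failed\n", trid.traddr);
		return 1;
	}
	printf("connected to %s\n", trid.subnqn);

	spdk_nvme_detach(ctrlr);	/* drives a shutdown sequence like the one logged here */
	return 0;
}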
01:24:25.026 [2024-12-09 05:19:07.295611] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.026 [2024-12-09 05:19:07.295622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.026 [2024-12-09 05:19:07.295640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.026 [2024-12-09 05:19:07.295679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.026 [2024-12-09 05:19:07.295684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.026 [2024-12-09 05:19:07.295686] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.026 [2024-12-09 05:19:07.295704] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.026 [2024-12-09 05:19:07.295716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.026 [2024-12-09 05:19:07.295727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.026 [2024-12-09 05:19:07.295775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.026 [2024-12-09 05:19:07.295780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.026 [2024-12-09 05:19:07.295783] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295786] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.026 [2024-12-09 05:19:07.295793] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.026 [2024-12-09 05:19:07.295805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.026 [2024-12-09 05:19:07.295816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.026 [2024-12-09 05:19:07.295860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.026 [2024-12-09 05:19:07.295865] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.026 [2024-12-09 05:19:07.295868] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.026 [2024-12-09 05:19:07.295878] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
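The long run of near-identical DEBUG entries around this point is the host side of the controller shutdown handshake: after nvme_ctrlr_shutdown_set_cc_done reports RTD3E = 0 us and a 10000 ms shutdown timeout, the initiator keeps issuing fabrics Property Get capsules for CSTS until the shutdown-status field reports completion. The short C sketch below models that CC.SHN / CSTS.SHST loop against an in-memory stand-in controller; prop_set() and prop_get() are hypothetical helpers standing in for the Property Set/Get capsules seen in the log, not SPDK functions.

/* Hypothetical model of the CC.SHN / CSTS.SHST handshake behind the repeated
 * FABRIC PROPERTY GET entries; prop_set()/prop_get() are stand-ins for the
 * fabrics Property Set/Get capsules, not SPDK APIs. Register offsets and bit
 * positions follow the NVMe base specification. */
#include <stdint.h>
#include <stdio.h>

#define REG_CC          0x14u                /* Controller Configuration */
#define REG_CSTS        0x1cu                /* Controller Status        */
#define CC_SHN_NORMAL   (1u << 14)           /* CC.SHN = 01b (normal shutdown) */
#define CSTS_SHST_MASK  (3u << 2)            /* CSTS.SHST field          */
#define CSTS_SHST_DONE  (2u << 2)            /* 10b: shutdown processing complete */

static uint32_t regs_cc, regs_csts;          /* simulated controller state */

static void prop_set(uint32_t off, uint32_t val)
{
	if (off == REG_CC) {
		regs_cc = val;
		if (val & CC_SHN_NORMAL) {
			regs_csts |= CSTS_SHST_DONE;   /* model an instant shutdown */
		}
	}
}

static uint32_t prop_get(uint32_t off)
{
	return off == REG_CSTS ? regs_csts : regs_cc;
}

int main(void)
{
	int polls = 0;

	prop_set(REG_CC, prop_get(REG_CC) | CC_SHN_NORMAL);   /* Property Set: request shutdown */
	while ((prop_get(REG_CSTS) & CSTS_SHST_MASK) != CSTS_SHST_DONE) {
		polls++;   /* a real host also enforces the logged 10000 ms shutdown timeout */
	}
	printf("CSTS.SHST reported shutdown complete after %d extra polls\n", polls);
	return 0;
}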
01:24:25.026 [2024-12-09 05:19:07.295883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.026 [2024-12-09 05:19:07.295889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.026 [2024-12-09 05:19:07.295900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.026 [2024-12-09 05:19:07.295942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.026 [2024-12-09 05:19:07.295947] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.026 [2024-12-09 05:19:07.295950] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.026 [2024-12-09 05:19:07.295960] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.295966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.026 [2024-12-09 05:19:07.295971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.026 [2024-12-09 05:19:07.295982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.026 [2024-12-09 05:19:07.296033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.026 [2024-12-09 05:19:07.296038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.026 [2024-12-09 05:19:07.296041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.296044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.026 [2024-12-09 05:19:07.296051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.296054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.026 [2024-12-09 05:19:07.296057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.026 [2024-12-09 05:19:07.296062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.026 [2024-12-09 05:19:07.296074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.026 [2024-12-09 05:19:07.296113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.026 [2024-12-09 05:19:07.296118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.027 [2024-12-09 05:19:07.296121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.027 [2024-12-09 05:19:07.296131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296134] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.027 [2024-12-09 05:19:07.296142] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.027 [2024-12-09 05:19:07.296161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.027 [2024-12-09 05:19:07.296202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.027 [2024-12-09 05:19:07.296207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.027 [2024-12-09 05:19:07.296209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.027 [2024-12-09 05:19:07.296227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296234] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.027 [2024-12-09 05:19:07.296239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.027 [2024-12-09 05:19:07.296250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.027 [2024-12-09 05:19:07.296291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.027 [2024-12-09 05:19:07.296296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.027 [2024-12-09 05:19:07.296307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.027 [2024-12-09 05:19:07.296318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296321] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.027 [2024-12-09 05:19:07.296337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.027 [2024-12-09 05:19:07.296349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.027 [2024-12-09 05:19:07.296399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.027 [2024-12-09 05:19:07.296405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.027 [2024-12-09 05:19:07.296407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.027 [2024-12-09 05:19:07.296418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.027 [2024-12-09 05:19:07.296428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.027 [2024-12-09 05:19:07.296439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xb43bc0, cid 3, qid 0 01:24:25.027 [2024-12-09 05:19:07.296483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.027 [2024-12-09 05:19:07.296488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.027 [2024-12-09 05:19:07.296491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.027 [2024-12-09 05:19:07.296501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.027 [2024-12-09 05:19:07.296512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.027 [2024-12-09 05:19:07.296524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.027 [2024-12-09 05:19:07.296564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.027 [2024-12-09 05:19:07.296570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.027 [2024-12-09 05:19:07.296572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.027 [2024-12-09 05:19:07.296582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.027 [2024-12-09 05:19:07.296594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.027 [2024-12-09 05:19:07.296612] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.027 [2024-12-09 05:19:07.296660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.027 [2024-12-09 05:19:07.296665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.027 [2024-12-09 05:19:07.296668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.027 [2024-12-09 05:19:07.296686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.027 [2024-12-09 05:19:07.296697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.027 [2024-12-09 05:19:07.296709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.027 [2024-12-09 05:19:07.296761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.027 [2024-12-09 05:19:07.296767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 01:24:25.027 [2024-12-09 05:19:07.296769] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.027 [2024-12-09 05:19:07.296780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.027 [2024-12-09 05:19:07.296791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.027 [2024-12-09 05:19:07.296802] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.027 [2024-12-09 05:19:07.296845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.027 [2024-12-09 05:19:07.296851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.027 [2024-12-09 05:19:07.296853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.027 [2024-12-09 05:19:07.296863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.027 [2024-12-09 05:19:07.296874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.027 [2024-12-09 05:19:07.296885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.027 [2024-12-09 05:19:07.296927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.027 [2024-12-09 05:19:07.296932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.027 [2024-12-09 05:19:07.296935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.027 [2024-12-09 05:19:07.296945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.296950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.027 [2024-12-09 05:19:07.296956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.027 [2024-12-09 05:19:07.296967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.027 [2024-12-09 05:19:07.297006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.027 [2024-12-09 05:19:07.297011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.027 [2024-12-09 05:19:07.297014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.297017] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.027 [2024-12-09 05:19:07.297024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.297027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.297030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.027 [2024-12-09 05:19:07.297035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.027 [2024-12-09 05:19:07.297054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.027 [2024-12-09 05:19:07.297087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.027 [2024-12-09 05:19:07.297093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.027 [2024-12-09 05:19:07.297095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.297098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.027 [2024-12-09 05:19:07.297105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.297108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.027 [2024-12-09 05:19:07.297111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.027 [2024-12-09 05:19:07.297124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.027 [2024-12-09 05:19:07.297136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.028 [2024-12-09 05:19:07.297176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.028 [2024-12-09 05:19:07.297181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.028 [2024-12-09 05:19:07.297192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.028 [2024-12-09 05:19:07.297203] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.028 [2024-12-09 05:19:07.297215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.028 [2024-12-09 05:19:07.297226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.028 [2024-12-09 05:19:07.297264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.028 [2024-12-09 05:19:07.297270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.028 [2024-12-09 05:19:07.297272] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.028 [2024-12-09 05:19:07.297283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297286] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.028 [2024-12-09 05:19:07.297294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.028 [2024-12-09 05:19:07.297305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.028 [2024-12-09 05:19:07.297361] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.028 [2024-12-09 05:19:07.297366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.028 [2024-12-09 05:19:07.297369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.028 [2024-12-09 05:19:07.297379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297385] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.028 [2024-12-09 05:19:07.297390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.028 [2024-12-09 05:19:07.297402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.028 [2024-12-09 05:19:07.297455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.028 [2024-12-09 05:19:07.297461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.028 [2024-12-09 05:19:07.297463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.028 [2024-12-09 05:19:07.297474] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.028 [2024-12-09 05:19:07.297485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.028 [2024-12-09 05:19:07.297495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.028 [2024-12-09 05:19:07.297544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.028 [2024-12-09 05:19:07.297549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.028 [2024-12-09 05:19:07.297552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297554] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.028 [2024-12-09 05:19:07.297562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.028 [2024-12-09 
05:19:07.297573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.028 [2024-12-09 05:19:07.297584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.028 [2024-12-09 05:19:07.297626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.028 [2024-12-09 05:19:07.297631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.028 [2024-12-09 05:19:07.297634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.028 [2024-12-09 05:19:07.297644] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.028 [2024-12-09 05:19:07.297655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.028 [2024-12-09 05:19:07.297666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.028 [2024-12-09 05:19:07.297720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.028 [2024-12-09 05:19:07.297725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.028 [2024-12-09 05:19:07.297727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.028 [2024-12-09 05:19:07.297736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.028 [2024-12-09 05:19:07.297746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.028 [2024-12-09 05:19:07.297756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.028 [2024-12-09 05:19:07.297791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.028 [2024-12-09 05:19:07.297796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.028 [2024-12-09 05:19:07.297798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.028 [2024-12-09 05:19:07.297807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.028 [2024-12-09 05:19:07.297817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.028 [2024-12-09 05:19:07.297834] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.028 [2024-12-09 05:19:07.297869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.028 [2024-12-09 05:19:07.297873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.028 [2024-12-09 05:19:07.297875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297878] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.028 [2024-12-09 05:19:07.297884] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.028 [2024-12-09 05:19:07.297894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.028 [2024-12-09 05:19:07.297912] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.028 [2024-12-09 05:19:07.297942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.028 [2024-12-09 05:19:07.297946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.028 [2024-12-09 05:19:07.297949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.028 [2024-12-09 05:19:07.297958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297960] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.297963] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.028 [2024-12-09 05:19:07.297968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.028 [2024-12-09 05:19:07.297988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.028 [2024-12-09 05:19:07.298030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.028 [2024-12-09 05:19:07.298034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.028 [2024-12-09 05:19:07.298037] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.298046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.028 [2024-12-09 05:19:07.298053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.298056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.298059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.028 [2024-12-09 05:19:07.298064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.028 [2024-12-09 05:19:07.298074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.028 [2024-12-09 05:19:07.298115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.028 [2024-12-09 
05:19:07.298120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.028 [2024-12-09 05:19:07.298122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.298125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.028 [2024-12-09 05:19:07.298131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.028 [2024-12-09 05:19:07.298134] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.029 [2024-12-09 05:19:07.298136] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.029 [2024-12-09 05:19:07.298141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.029 [2024-12-09 05:19:07.298151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.029 [2024-12-09 05:19:07.298184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.029 [2024-12-09 05:19:07.298189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.029 [2024-12-09 05:19:07.298191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.029 [2024-12-09 05:19:07.298202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.029 [2024-12-09 05:19:07.298209] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.029 [2024-12-09 05:19:07.298212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.029 [2024-12-09 05:19:07.298214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.029 [2024-12-09 05:19:07.298219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.029 [2024-12-09 05:19:07.298229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.029 [2024-12-09 05:19:07.298278] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.029 [2024-12-09 05:19:07.298282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.029 [2024-12-09 05:19:07.298285] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.029 [2024-12-09 05:19:07.298287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.029 [2024-12-09 05:19:07.298294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.029 [2024-12-09 05:19:07.298297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.029 [2024-12-09 05:19:07.298299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.029 [2024-12-09 05:19:07.298304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.029 [2024-12-09 05:19:07.298314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.029 [2024-12-09 05:19:07.302338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.029 [2024-12-09 05:19:07.302352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.029 [2024-12-09 05:19:07.302355] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.029 [2024-12-09 
05:19:07.302358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.029 [2024-12-09 05:19:07.302365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.029 [2024-12-09 05:19:07.302368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.029 [2024-12-09 05:19:07.302371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xadf750) 01:24:25.029 [2024-12-09 05:19:07.302376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.029 [2024-12-09 05:19:07.302393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb43bc0, cid 3, qid 0 01:24:25.029 [2024-12-09 05:19:07.302427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.029 [2024-12-09 05:19:07.302432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.029 [2024-12-09 05:19:07.302434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.029 [2024-12-09 05:19:07.302437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb43bc0) on tqpair=0xadf750 01:24:25.029 [2024-12-09 05:19:07.302442] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 9 milliseconds 01:24:25.029 01:24:25.029 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 01:24:25.029 [2024-12-09 05:19:07.414934] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:24:25.029 [2024-12-09 05:19:07.414965] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74043 ] 01:24:25.290 [2024-12-09 05:19:07.560171] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 01:24:25.290 [2024-12-09 05:19:07.560229] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 01:24:25.290 [2024-12-09 05:19:07.560233] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 01:24:25.290 [2024-12-09 05:19:07.560246] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 01:24:25.290 [2024-12-09 05:19:07.560255] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 01:24:25.290 [2024-12-09 05:19:07.560531] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 01:24:25.290 [2024-12-09 05:19:07.560581] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x119d750 0 01:24:25.290 [2024-12-09 05:19:07.573362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 01:24:25.290 [2024-12-09 05:19:07.573381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 01:24:25.290 [2024-12-09 05:19:07.573386] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 01:24:25.290 [2024-12-09 05:19:07.573388] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 01:24:25.290 [2024-12-09 05:19:07.573418] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.290 [2024-12-09 05:19:07.573422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.290 [2024-12-09 05:19:07.573426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119d750) 01:24:25.290 [2024-12-09 05:19:07.573437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:24:25.290 [2024-12-09 05:19:07.573460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201740, cid 0, qid 0 01:24:25.290 [2024-12-09 05:19:07.581362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.290 [2024-12-09 05:19:07.581377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.290 [2024-12-09 05:19:07.581380] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.290 [2024-12-09 05:19:07.581383] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201740) on tqpair=0x119d750 01:24:25.290 [2024-12-09 05:19:07.581390] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 01:24:25.290 [2024-12-09 05:19:07.581395] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 01:24:25.290 [2024-12-09 05:19:07.581400] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 01:24:25.290 [2024-12-09 05:19:07.581413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.290 [2024-12-09 05:19:07.581416] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.290 [2024-12-09 05:19:07.581418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119d750) 01:24:25.290 [2024-12-09 05:19:07.581424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.290 [2024-12-09 05:19:07.581441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201740, cid 0, qid 0 01:24:25.290 [2024-12-09 05:19:07.581484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.290 [2024-12-09 05:19:07.581488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.290 [2024-12-09 05:19:07.581490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.290 [2024-12-09 05:19:07.581493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201740) on tqpair=0x119d750 01:24:25.290 [2024-12-09 05:19:07.581497] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 01:24:25.291 [2024-12-09 05:19:07.581502] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 01:24:25.291 [2024-12-09 05:19:07.581506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.581509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.581511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119d750) 01:24:25.291 [2024-12-09 05:19:07.581516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.291 [2024-12-09 05:19:07.581527] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201740, cid 0, qid 0 01:24:25.291 [2024-12-09 05:19:07.581561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.291 [2024-12-09 05:19:07.581565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.291 [2024-12-09 05:19:07.581567] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.581570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201740) on tqpair=0x119d750 01:24:25.291 [2024-12-09 05:19:07.581573] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 01:24:25.291 [2024-12-09 05:19:07.581579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 01:24:25.291 [2024-12-09 05:19:07.581584] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.581586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.581588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119d750) 01:24:25.291 [2024-12-09 05:19:07.581593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.291 [2024-12-09 05:19:07.581603] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201740, cid 0, qid 0 01:24:25.291 [2024-12-09 05:19:07.581639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.291 [2024-12-09 05:19:07.581643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.291 [2024-12-09 05:19:07.581645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.581648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201740) on tqpair=0x119d750 01:24:25.291 [2024-12-09 05:19:07.581651] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 01:24:25.291 [2024-12-09 05:19:07.581657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.581660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.581662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119d750) 01:24:25.291 [2024-12-09 05:19:07.581667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.291 [2024-12-09 05:19:07.581677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201740, cid 0, qid 0 01:24:25.291 [2024-12-09 05:19:07.581713] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.291 [2024-12-09 05:19:07.581718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.291 [2024-12-09 05:19:07.581720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.581722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201740) on tqpair=0x119d750 01:24:25.291 [2024-12-09 05:19:07.581725] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 01:24:25.291 [2024-12-09 05:19:07.581728] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 01:24:25.291 [2024-12-09 05:19:07.581733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 01:24:25.291 [2024-12-09 05:19:07.581837] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 01:24:25.291 [2024-12-09 05:19:07.581845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 01:24:25.291 [2024-12-09 05:19:07.581852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.581855] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.581858] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119d750) 01:24:25.291 [2024-12-09 05:19:07.581864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.291 [2024-12-09 05:19:07.581887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201740, cid 0, qid 0 01:24:25.291 [2024-12-09 05:19:07.581926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.291 [2024-12-09 05:19:07.581932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.291 [2024-12-09 05:19:07.581934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.581937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201740) on tqpair=0x119d750 01:24:25.291 [2024-12-09 05:19:07.581940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 01:24:25.291 [2024-12-09 05:19:07.581947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.581950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.581953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119d750) 01:24:25.291 [2024-12-09 05:19:07.581958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.291 [2024-12-09 05:19:07.581969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201740, cid 0, qid 0 01:24:25.291 [2024-12-09 05:19:07.582008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.291 [2024-12-09 05:19:07.582013] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.291 [2024-12-09 05:19:07.582016] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.582018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201740) on tqpair=0x119d750 01:24:25.291 [2024-12-09 05:19:07.582032] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 01:24:25.291 [2024-12-09 05:19:07.582037] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 01:24:25.291 [2024-12-09 05:19:07.582043] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 01:24:25.291 [2024-12-09 05:19:07.582051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 01:24:25.291 [2024-12-09 05:19:07.582058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.582061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119d750) 01:24:25.291 [2024-12-09 05:19:07.582067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.291 [2024-12-09 05:19:07.582078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201740, cid 0, qid 0 01:24:25.291 [2024-12-09 05:19:07.582164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:24:25.291 [2024-12-09 05:19:07.582170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:24:25.291 [2024-12-09 05:19:07.582172] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.582175] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119d750): datao=0, datal=4096, cccid=0 01:24:25.291 [2024-12-09 05:19:07.582178] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1201740) on tqpair(0x119d750): expected_datao=0, payload_size=4096 01:24:25.291 [2024-12-09 05:19:07.582182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.582189] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.582192] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:24:25.291 [2024-12-09 05:19:07.582206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.291 [2024-12-09 05:19:07.582211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.291 [2024-12-09 05:19:07.582214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201740) on tqpair=0x119d750 01:24:25.292 [2024-12-09 05:19:07.582223] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 01:24:25.292 [2024-12-09 05:19:07.582226] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 01:24:25.292 [2024-12-09 05:19:07.582238] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 01:24:25.292 [2024-12-09 05:19:07.582245] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 01:24:25.292 [2024-12-09 05:19:07.582249] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 01:24:25.292 [2024-12-09 05:19:07.582252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 01:24:25.292 [2024-12-09 05:19:07.582259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 01:24:25.292 [2024-12-09 05:19:07.582264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
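
The two limits logged by nvme_ctrlr_identify_done just above are related by the NVMe MDTS rule: the controller reports its maximum data transfer size as a power of two multiplied by the minimum memory page size (4096 bytes per the identify report later in this log), and the host uses the smaller of that value and the transport limit. A minimal sketch of that arithmetic, assuming MDTS = 5 (inferred from 4096 << 5 = 131072); the snippet is illustrative only and is not SPDK code:

/* Illustrative only: reproduces the MDTS math seen in the log above. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t mpsmin_bytes       = 4096;        /* Memory Page Size Minimum from the identify report */
    uint8_t  mdts               = 5;           /* assumed: 4096 << 5 == 131072, matching the log */
    uint32_t transport_max_xfer = 4294967295u; /* transport max_xfer_size from the log */

    uint32_t mdts_bytes = mpsmin_bytes << mdts;            /* 131072 */
    uint32_t max_xfer   = mdts_bytes < transport_max_xfer ?
                          mdts_bytes : transport_max_xfer; /* effective limit: 131072 */

    printf("MDTS max_xfer_size %u, effective max_xfer_size %u\n", mdts_bytes, max_xfer);
    return 0;
}
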
01:24:25.292 [2024-12-09 05:19:07.582267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119d750) 01:24:25.292 [2024-12-09 05:19:07.582275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 01:24:25.292 [2024-12-09 05:19:07.582293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201740, cid 0, qid 0 01:24:25.292 [2024-12-09 05:19:07.582345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.292 [2024-12-09 05:19:07.582351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.292 [2024-12-09 05:19:07.582354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201740) on tqpair=0x119d750 01:24:25.292 [2024-12-09 05:19:07.582363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582366] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582368] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119d750) 01:24:25.292 [2024-12-09 05:19:07.582373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:24:25.292 [2024-12-09 05:19:07.582378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x119d750) 01:24:25.292 [2024-12-09 05:19:07.582387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:24:25.292 [2024-12-09 05:19:07.582392] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x119d750) 01:24:25.292 [2024-12-09 05:19:07.582401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:24:25.292 [2024-12-09 05:19:07.582406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119d750) 01:24:25.292 [2024-12-09 05:19:07.582423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:24:25.292 [2024-12-09 05:19:07.582427] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 01:24:25.292 [2024-12-09 05:19:07.582433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 01:24:25.292 [2024-12-09 05:19:07.582438] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119d750) 01:24:25.292 [2024-12-09 05:19:07.582446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.292 [2024-12-09 05:19:07.582469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201740, cid 0, qid 0 01:24:25.292 [2024-12-09 05:19:07.582474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12018c0, cid 1, qid 0 01:24:25.292 [2024-12-09 05:19:07.582478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201a40, cid 2, qid 0 01:24:25.292 [2024-12-09 05:19:07.582481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201bc0, cid 3, qid 0 01:24:25.292 [2024-12-09 05:19:07.582485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201d40, cid 4, qid 0 01:24:25.292 [2024-12-09 05:19:07.582578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.292 [2024-12-09 05:19:07.582587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.292 [2024-12-09 05:19:07.582590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201d40) on tqpair=0x119d750 01:24:25.292 [2024-12-09 05:19:07.582604] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 01:24:25.292 [2024-12-09 05:19:07.582608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 01:24:25.292 [2024-12-09 05:19:07.582615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 01:24:25.292 [2024-12-09 05:19:07.582619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 01:24:25.292 [2024-12-09 05:19:07.582624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119d750) 01:24:25.292 [2024-12-09 05:19:07.582635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:24:25.292 [2024-12-09 05:19:07.582646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201d40, cid 4, qid 0 01:24:25.292 [2024-12-09 05:19:07.582692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.292 [2024-12-09 05:19:07.582697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.292 [2024-12-09 05:19:07.582699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201d40) on tqpair=0x119d750 01:24:25.292 [2024-12-09 05:19:07.582782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns 
(timeout 30000 ms) 01:24:25.292 [2024-12-09 05:19:07.582793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 01:24:25.292 [2024-12-09 05:19:07.582805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582808] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119d750) 01:24:25.292 [2024-12-09 05:19:07.582812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.292 [2024-12-09 05:19:07.582823] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201d40, cid 4, qid 0 01:24:25.292 [2024-12-09 05:19:07.582871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:24:25.292 [2024-12-09 05:19:07.582876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:24:25.292 [2024-12-09 05:19:07.582878] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:24:25.292 [2024-12-09 05:19:07.582880] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119d750): datao=0, datal=4096, cccid=4 01:24:25.292 [2024-12-09 05:19:07.582883] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1201d40) on tqpair(0x119d750): expected_datao=0, payload_size=4096 01:24:25.293 [2024-12-09 05:19:07.582886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.582891] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.582894] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.582900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.293 [2024-12-09 05:19:07.582910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.293 [2024-12-09 05:19:07.582913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.582916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201d40) on tqpair=0x119d750 01:24:25.293 [2024-12-09 05:19:07.582923] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 01:24:25.293 [2024-12-09 05:19:07.582935] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 01:24:25.293 [2024-12-09 05:19:07.582942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 01:24:25.293 [2024-12-09 05:19:07.582947] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.582950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119d750) 01:24:25.293 [2024-12-09 05:19:07.582954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.293 [2024-12-09 05:19:07.582965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201d40, cid 4, qid 0 01:24:25.293 [2024-12-09 05:19:07.583031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:24:25.293 [2024-12-09 05:19:07.583035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:24:25.293 [2024-12-09 05:19:07.583038] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583040] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119d750): datao=0, datal=4096, cccid=4 01:24:25.293 [2024-12-09 05:19:07.583043] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1201d40) on tqpair(0x119d750): expected_datao=0, payload_size=4096 01:24:25.293 [2024-12-09 05:19:07.583045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583050] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583052] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.293 [2024-12-09 05:19:07.583068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.293 [2024-12-09 05:19:07.583079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201d40) on tqpair=0x119d750 01:24:25.293 [2024-12-09 05:19:07.583095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 01:24:25.293 [2024-12-09 05:19:07.583124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 01:24:25.293 [2024-12-09 05:19:07.583130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119d750) 01:24:25.293 [2024-12-09 05:19:07.583138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.293 [2024-12-09 05:19:07.583151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201d40, cid 4, qid 0 01:24:25.293 [2024-12-09 05:19:07.583196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:24:25.293 [2024-12-09 05:19:07.583201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:24:25.293 [2024-12-09 05:19:07.583204] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583206] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119d750): datao=0, datal=4096, cccid=4 01:24:25.293 [2024-12-09 05:19:07.583209] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1201d40) on tqpair(0x119d750): expected_datao=0, payload_size=4096 01:24:25.293 [2024-12-09 05:19:07.583212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583217] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583220] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.293 [2024-12-09 05:19:07.583239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.293 [2024-12-09 05:19:07.583241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1201d40) on tqpair=0x119d750 01:24:25.293 [2024-12-09 05:19:07.583250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 01:24:25.293 [2024-12-09 05:19:07.583264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 01:24:25.293 [2024-12-09 05:19:07.583272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 01:24:25.293 [2024-12-09 05:19:07.583277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 01:24:25.293 [2024-12-09 05:19:07.583281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 01:24:25.293 [2024-12-09 05:19:07.583285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 01:24:25.293 [2024-12-09 05:19:07.583289] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 01:24:25.293 [2024-12-09 05:19:07.583299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 01:24:25.293 [2024-12-09 05:19:07.583303] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 01:24:25.293 [2024-12-09 05:19:07.583317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119d750) 01:24:25.293 [2024-12-09 05:19:07.583349] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.293 [2024-12-09 05:19:07.583356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583361] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x119d750) 01:24:25.293 [2024-12-09 05:19:07.583372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 01:24:25.293 [2024-12-09 05:19:07.583395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201d40, cid 4, qid 0 01:24:25.293 [2024-12-09 05:19:07.583400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201ec0, cid 5, qid 0 01:24:25.293 [2024-12-09 05:19:07.583461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.293 [2024-12-09 05:19:07.583466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.293 [2024-12-09 05:19:07.583469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201d40) on tqpair=0x119d750 01:24:25.293 [2024-12-09 05:19:07.583477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.293 [2024-12-09 05:19:07.583482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
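
With the admin queue now in the ready state, the remainder of the trace is the identify tool issuing Get Features and Get Log Page commands before printing the controller report below. A minimal host-side sketch of the same flow, assuming only public SPDK host APIs (spdk_nvme_transport_id_parse, spdk_nvme_connect, spdk_nvme_ctrlr_get_data) and an arbitrary program name; it is a simplified illustration, not the spdk_nvme_identify source:

/*
 * Sketch only: connects to the same NVMe-oF/TCP subsystem exercised above
 * and prints a few Identify Controller fields.
 */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";   /* arbitrary name used by this sketch */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Same transport ID string that host/identify.sh passes to the tool via -r. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the connect/enable/identify state machine traced in the debug log above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* A few of the Identify Controller fields shown in the report that follows. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number:            %.20s\n", cdata->sn);
	printf("Model Number:             %.40s\n", cdata->mn);
	printf("Max Number of Namespaces: %u\n", cdata->nn);

	spdk_nvme_detach(ctrlr);
	return 0;
}
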
01:24:25.293 [2024-12-09 05:19:07.583484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201ec0) on tqpair=0x119d750 01:24:25.293 [2024-12-09 05:19:07.583503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x119d750) 01:24:25.293 [2024-12-09 05:19:07.583511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.293 [2024-12-09 05:19:07.583522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201ec0, cid 5, qid 0 01:24:25.293 [2024-12-09 05:19:07.583560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.293 [2024-12-09 05:19:07.583573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.293 [2024-12-09 05:19:07.583575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.293 [2024-12-09 05:19:07.583578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201ec0) on tqpair=0x119d750 01:24:25.294 [2024-12-09 05:19:07.583586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.583588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x119d750) 01:24:25.294 [2024-12-09 05:19:07.583593] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.294 [2024-12-09 05:19:07.583611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201ec0, cid 5, qid 0 01:24:25.294 [2024-12-09 05:19:07.583664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.294 [2024-12-09 05:19:07.583676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.294 [2024-12-09 05:19:07.583679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.583682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201ec0) on tqpair=0x119d750 01:24:25.294 [2024-12-09 05:19:07.583689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.583692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x119d750) 01:24:25.294 [2024-12-09 05:19:07.583697] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.294 [2024-12-09 05:19:07.583714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201ec0, cid 5, qid 0 01:24:25.294 [2024-12-09 05:19:07.583755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.294 [2024-12-09 05:19:07.583761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.294 [2024-12-09 05:19:07.583763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.583766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201ec0) on tqpair=0x119d750 01:24:25.294 [2024-12-09 05:19:07.583785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.583789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=5 on tqpair(0x119d750) 01:24:25.294 [2024-12-09 05:19:07.583794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.294 [2024-12-09 05:19:07.583800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.583803] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119d750) 01:24:25.294 [2024-12-09 05:19:07.583813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.294 [2024-12-09 05:19:07.583819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.583822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x119d750) 01:24:25.294 [2024-12-09 05:19:07.583826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.294 [2024-12-09 05:19:07.583833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.583835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x119d750) 01:24:25.294 [2024-12-09 05:19:07.583841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.294 [2024-12-09 05:19:07.583863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201ec0, cid 5, qid 0 01:24:25.294 [2024-12-09 05:19:07.583868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201d40, cid 4, qid 0 01:24:25.294 [2024-12-09 05:19:07.583871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1202040, cid 6, qid 0 01:24:25.294 [2024-12-09 05:19:07.583875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12021c0, cid 7, qid 0 01:24:25.294 [2024-12-09 05:19:07.584000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:24:25.294 [2024-12-09 05:19:07.584013] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:24:25.294 [2024-12-09 05:19:07.584016] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584019] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119d750): datao=0, datal=8192, cccid=5 01:24:25.294 [2024-12-09 05:19:07.584022] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1201ec0) on tqpair(0x119d750): expected_datao=0, payload_size=8192 01:24:25.294 [2024-12-09 05:19:07.584025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584038] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584042] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:24:25.294 [2024-12-09 05:19:07.584051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:24:25.294 [2024-12-09 05:19:07.584053] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584056] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x119d750): datao=0, datal=512, cccid=4 01:24:25.294 [2024-12-09 05:19:07.584059] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1201d40) on tqpair(0x119d750): expected_datao=0, payload_size=512 01:24:25.294 [2024-12-09 05:19:07.584062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584067] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584069] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:24:25.294 [2024-12-09 05:19:07.584079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:24:25.294 [2024-12-09 05:19:07.584081] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584084] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119d750): datao=0, datal=512, cccid=6 01:24:25.294 [2024-12-09 05:19:07.584087] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1202040) on tqpair(0x119d750): expected_datao=0, payload_size=512 01:24:25.294 [2024-12-09 05:19:07.584089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584095] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584097] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:24:25.294 [2024-12-09 05:19:07.584106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:24:25.294 [2024-12-09 05:19:07.584108] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584110] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119d750): datao=0, datal=4096, cccid=7 01:24:25.294 [2024-12-09 05:19:07.584113] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12021c0) on tqpair(0x119d750): expected_datao=0, payload_size=4096 01:24:25.294 [2024-12-09 05:19:07.584116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584122] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584125] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584131] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.294 [2024-12-09 05:19:07.584135] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.294 [2024-12-09 05:19:07.584145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201ec0) on tqpair=0x119d750 01:24:25.294 [2024-12-09 05:19:07.584159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.294 [2024-12-09 05:19:07.584164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.294 [2024-12-09 05:19:07.584166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201d40) on tqpair=0x119d750 01:24:25.294 [2024-12-09 05:19:07.584186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.294 [2024-12-09 05:19:07.584191] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.294 [2024-12-09 05:19:07.584194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1202040) on tqpair=0x119d750 01:24:25.294 [2024-12-09 05:19:07.584203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.294 [2024-12-09 05:19:07.584207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.294 [2024-12-09 05:19:07.584210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.294 [2024-12-09 05:19:07.584212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12021c0) on tqpair=0x119d750 01:24:25.295 ===================================================== 01:24:25.295 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:24:25.295 ===================================================== 01:24:25.295 Controller Capabilities/Features 01:24:25.295 ================================ 01:24:25.295 Vendor ID: 8086 01:24:25.295 Subsystem Vendor ID: 8086 01:24:25.295 Serial Number: SPDK00000000000001 01:24:25.295 Model Number: SPDK bdev Controller 01:24:25.295 Firmware Version: 25.01 01:24:25.295 Recommended Arb Burst: 6 01:24:25.295 IEEE OUI Identifier: e4 d2 5c 01:24:25.295 Multi-path I/O 01:24:25.295 May have multiple subsystem ports: Yes 01:24:25.295 May have multiple controllers: Yes 01:24:25.295 Associated with SR-IOV VF: No 01:24:25.295 Max Data Transfer Size: 131072 01:24:25.295 Max Number of Namespaces: 32 01:24:25.295 Max Number of I/O Queues: 127 01:24:25.295 NVMe Specification Version (VS): 1.3 01:24:25.295 NVMe Specification Version (Identify): 1.3 01:24:25.295 Maximum Queue Entries: 128 01:24:25.295 Contiguous Queues Required: Yes 01:24:25.295 Arbitration Mechanisms Supported 01:24:25.295 Weighted Round Robin: Not Supported 01:24:25.295 Vendor Specific: Not Supported 01:24:25.295 Reset Timeout: 15000 ms 01:24:25.295 Doorbell Stride: 4 bytes 01:24:25.295 NVM Subsystem Reset: Not Supported 01:24:25.295 Command Sets Supported 01:24:25.295 NVM Command Set: Supported 01:24:25.295 Boot Partition: Not Supported 01:24:25.295 Memory Page Size Minimum: 4096 bytes 01:24:25.295 Memory Page Size Maximum: 4096 bytes 01:24:25.295 Persistent Memory Region: Not Supported 01:24:25.295 Optional Asynchronous Events Supported 01:24:25.295 Namespace Attribute Notices: Supported 01:24:25.295 Firmware Activation Notices: Not Supported 01:24:25.295 ANA Change Notices: Not Supported 01:24:25.295 PLE Aggregate Log Change Notices: Not Supported 01:24:25.295 LBA Status Info Alert Notices: Not Supported 01:24:25.295 EGE Aggregate Log Change Notices: Not Supported 01:24:25.295 Normal NVM Subsystem Shutdown event: Not Supported 01:24:25.295 Zone Descriptor Change Notices: Not Supported 01:24:25.295 Discovery Log Change Notices: Not Supported 01:24:25.295 Controller Attributes 01:24:25.295 128-bit Host Identifier: Supported 01:24:25.295 Non-Operational Permissive Mode: Not Supported 01:24:25.295 NVM Sets: Not Supported 01:24:25.295 Read Recovery Levels: Not Supported 01:24:25.295 Endurance Groups: Not Supported 01:24:25.295 Predictable Latency Mode: Not Supported 01:24:25.295 Traffic Based Keep ALive: Not Supported 01:24:25.295 Namespace Granularity: Not Supported 01:24:25.295 SQ Associations: Not Supported 01:24:25.295 UUID List: Not Supported 01:24:25.295 Multi-Domain Subsystem: Not Supported 01:24:25.295 Fixed Capacity 
Management: Not Supported 01:24:25.295 Variable Capacity Management: Not Supported 01:24:25.295 Delete Endurance Group: Not Supported 01:24:25.295 Delete NVM Set: Not Supported 01:24:25.295 Extended LBA Formats Supported: Not Supported 01:24:25.295 Flexible Data Placement Supported: Not Supported 01:24:25.295 01:24:25.295 Controller Memory Buffer Support 01:24:25.295 ================================ 01:24:25.295 Supported: No 01:24:25.295 01:24:25.295 Persistent Memory Region Support 01:24:25.295 ================================ 01:24:25.295 Supported: No 01:24:25.295 01:24:25.295 Admin Command Set Attributes 01:24:25.295 ============================ 01:24:25.295 Security Send/Receive: Not Supported 01:24:25.295 Format NVM: Not Supported 01:24:25.295 Firmware Activate/Download: Not Supported 01:24:25.295 Namespace Management: Not Supported 01:24:25.295 Device Self-Test: Not Supported 01:24:25.295 Directives: Not Supported 01:24:25.295 NVMe-MI: Not Supported 01:24:25.295 Virtualization Management: Not Supported 01:24:25.295 Doorbell Buffer Config: Not Supported 01:24:25.295 Get LBA Status Capability: Not Supported 01:24:25.295 Command & Feature Lockdown Capability: Not Supported 01:24:25.295 Abort Command Limit: 4 01:24:25.295 Async Event Request Limit: 4 01:24:25.295 Number of Firmware Slots: N/A 01:24:25.295 Firmware Slot 1 Read-Only: N/A 01:24:25.295 Firmware Activation Without Reset: N/A 01:24:25.295 Multiple Update Detection Support: N/A 01:24:25.295 Firmware Update Granularity: No Information Provided 01:24:25.295 Per-Namespace SMART Log: No 01:24:25.295 Asymmetric Namespace Access Log Page: Not Supported 01:24:25.295 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 01:24:25.295 Command Effects Log Page: Supported 01:24:25.295 Get Log Page Extended Data: Supported 01:24:25.295 Telemetry Log Pages: Not Supported 01:24:25.295 Persistent Event Log Pages: Not Supported 01:24:25.295 Supported Log Pages Log Page: May Support 01:24:25.295 Commands Supported & Effects Log Page: Not Supported 01:24:25.295 Feature Identifiers & Effects Log Page:May Support 01:24:25.295 NVMe-MI Commands & Effects Log Page: May Support 01:24:25.295 Data Area 4 for Telemetry Log: Not Supported 01:24:25.295 Error Log Page Entries Supported: 128 01:24:25.295 Keep Alive: Supported 01:24:25.295 Keep Alive Granularity: 10000 ms 01:24:25.295 01:24:25.295 NVM Command Set Attributes 01:24:25.295 ========================== 01:24:25.295 Submission Queue Entry Size 01:24:25.295 Max: 64 01:24:25.295 Min: 64 01:24:25.295 Completion Queue Entry Size 01:24:25.295 Max: 16 01:24:25.295 Min: 16 01:24:25.295 Number of Namespaces: 32 01:24:25.295 Compare Command: Supported 01:24:25.295 Write Uncorrectable Command: Not Supported 01:24:25.295 Dataset Management Command: Supported 01:24:25.295 Write Zeroes Command: Supported 01:24:25.295 Set Features Save Field: Not Supported 01:24:25.295 Reservations: Supported 01:24:25.295 Timestamp: Not Supported 01:24:25.295 Copy: Supported 01:24:25.295 Volatile Write Cache: Present 01:24:25.295 Atomic Write Unit (Normal): 1 01:24:25.295 Atomic Write Unit (PFail): 1 01:24:25.295 Atomic Compare & Write Unit: 1 01:24:25.295 Fused Compare & Write: Supported 01:24:25.295 Scatter-Gather List 01:24:25.295 SGL Command Set: Supported 01:24:25.295 SGL Keyed: Supported 01:24:25.295 SGL Bit Bucket Descriptor: Not Supported 01:24:25.295 SGL Metadata Pointer: Not Supported 01:24:25.295 Oversized SGL: Not Supported 01:24:25.295 SGL Metadata Address: Not Supported 01:24:25.295 SGL Offset: Supported 01:24:25.296 Transport 
SGL Data Block: Not Supported 01:24:25.296 Replay Protected Memory Block: Not Supported 01:24:25.296 01:24:25.296 Firmware Slot Information 01:24:25.296 ========================= 01:24:25.296 Active slot: 1 01:24:25.296 Slot 1 Firmware Revision: 25.01 01:24:25.296 01:24:25.296 01:24:25.296 Commands Supported and Effects 01:24:25.296 ============================== 01:24:25.296 Admin Commands 01:24:25.296 -------------- 01:24:25.296 Get Log Page (02h): Supported 01:24:25.296 Identify (06h): Supported 01:24:25.296 Abort (08h): Supported 01:24:25.296 Set Features (09h): Supported 01:24:25.296 Get Features (0Ah): Supported 01:24:25.296 Asynchronous Event Request (0Ch): Supported 01:24:25.296 Keep Alive (18h): Supported 01:24:25.296 I/O Commands 01:24:25.296 ------------ 01:24:25.296 Flush (00h): Supported LBA-Change 01:24:25.296 Write (01h): Supported LBA-Change 01:24:25.296 Read (02h): Supported 01:24:25.296 Compare (05h): Supported 01:24:25.296 Write Zeroes (08h): Supported LBA-Change 01:24:25.296 Dataset Management (09h): Supported LBA-Change 01:24:25.296 Copy (19h): Supported LBA-Change 01:24:25.296 01:24:25.296 Error Log 01:24:25.296 ========= 01:24:25.296 01:24:25.296 Arbitration 01:24:25.296 =========== 01:24:25.296 Arbitration Burst: 1 01:24:25.296 01:24:25.296 Power Management 01:24:25.296 ================ 01:24:25.296 Number of Power States: 1 01:24:25.296 Current Power State: Power State #0 01:24:25.296 Power State #0: 01:24:25.296 Max Power: 0.00 W 01:24:25.296 Non-Operational State: Operational 01:24:25.296 Entry Latency: Not Reported 01:24:25.296 Exit Latency: Not Reported 01:24:25.296 Relative Read Throughput: 0 01:24:25.296 Relative Read Latency: 0 01:24:25.296 Relative Write Throughput: 0 01:24:25.296 Relative Write Latency: 0 01:24:25.296 Idle Power: Not Reported 01:24:25.296 Active Power: Not Reported 01:24:25.296 Non-Operational Permissive Mode: Not Supported 01:24:25.296 01:24:25.296 Health Information 01:24:25.296 ================== 01:24:25.296 Critical Warnings: 01:24:25.296 Available Spare Space: OK 01:24:25.296 Temperature: OK 01:24:25.296 Device Reliability: OK 01:24:25.296 Read Only: No 01:24:25.296 Volatile Memory Backup: OK 01:24:25.296 Current Temperature: 0 Kelvin (-273 Celsius) 01:24:25.296 Temperature Threshold: 0 Kelvin (-273 Celsius) 01:24:25.296 Available Spare: 0% 01:24:25.296 Available Spare Threshold: 0% 01:24:25.296 Life Percentage Used:[2024-12-09 05:19:07.584348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.296 [2024-12-09 05:19:07.584353] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x119d750) 01:24:25.296 [2024-12-09 05:19:07.584358] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.296 [2024-12-09 05:19:07.584372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12021c0, cid 7, qid 0 01:24:25.296 [2024-12-09 05:19:07.584413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.296 [2024-12-09 05:19:07.584417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.296 [2024-12-09 05:19:07.584419] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.296 [2024-12-09 05:19:07.584422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12021c0) on tqpair=0x119d750 01:24:25.296 [2024-12-09 05:19:07.584449] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 
1] Prepare to destruct SSD 01:24:25.296 [2024-12-09 05:19:07.584463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201740) on tqpair=0x119d750 01:24:25.296 [2024-12-09 05:19:07.584468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:24:25.296 [2024-12-09 05:19:07.584471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12018c0) on tqpair=0x119d750 01:24:25.296 [2024-12-09 05:19:07.584474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:24:25.296 [2024-12-09 05:19:07.584477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201a40) on tqpair=0x119d750 01:24:25.296 [2024-12-09 05:19:07.584480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:24:25.296 [2024-12-09 05:19:07.584483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201bc0) on tqpair=0x119d750 01:24:25.296 [2024-12-09 05:19:07.584486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:24:25.296 [2024-12-09 05:19:07.584492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.296 [2024-12-09 05:19:07.584495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.296 [2024-12-09 05:19:07.584497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119d750) 01:24:25.296 [2024-12-09 05:19:07.584502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.296 [2024-12-09 05:19:07.584515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201bc0, cid 3, qid 0 01:24:25.296 [2024-12-09 05:19:07.584554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.296 [2024-12-09 05:19:07.584559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.296 [2024-12-09 05:19:07.584561] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.296 [2024-12-09 05:19:07.584563] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201bc0) on tqpair=0x119d750 01:24:25.297 [2024-12-09 05:19:07.584568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119d750) 01:24:25.297 [2024-12-09 05:19:07.584577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.297 [2024-12-09 05:19:07.584589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201bc0, cid 3, qid 0 01:24:25.297 [2024-12-09 05:19:07.584651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.297 [2024-12-09 05:19:07.584656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.297 [2024-12-09 05:19:07.584658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201bc0) on tqpair=0x119d750 01:24:25.297 [2024-12-09 05:19:07.584664] 
nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 01:24:25.297 [2024-12-09 05:19:07.584674] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 01:24:25.297 [2024-12-09 05:19:07.584680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119d750) 01:24:25.297 [2024-12-09 05:19:07.584690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.297 [2024-12-09 05:19:07.584700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201bc0, cid 3, qid 0 01:24:25.297 [2024-12-09 05:19:07.584732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.297 [2024-12-09 05:19:07.584736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.297 [2024-12-09 05:19:07.584739] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201bc0) on tqpair=0x119d750 01:24:25.297 [2024-12-09 05:19:07.584754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119d750) 01:24:25.297 [2024-12-09 05:19:07.584769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.297 [2024-12-09 05:19:07.584779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201bc0, cid 3, qid 0 01:24:25.297 [2024-12-09 05:19:07.584817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.297 [2024-12-09 05:19:07.584822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.297 [2024-12-09 05:19:07.584824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201bc0) on tqpair=0x119d750 01:24:25.297 [2024-12-09 05:19:07.584833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119d750) 01:24:25.297 [2024-12-09 05:19:07.584842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.297 [2024-12-09 05:19:07.584858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201bc0, cid 3, qid 0 01:24:25.297 [2024-12-09 05:19:07.584891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.297 [2024-12-09 05:19:07.584895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.297 [2024-12-09 05:19:07.584898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584900] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x1201bc0) on tqpair=0x119d750 01:24:25.297 [2024-12-09 05:19:07.584907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119d750) 01:24:25.297 [2024-12-09 05:19:07.584922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.297 [2024-12-09 05:19:07.584932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201bc0, cid 3, qid 0 01:24:25.297 [2024-12-09 05:19:07.584968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.297 [2024-12-09 05:19:07.584973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.297 [2024-12-09 05:19:07.584975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201bc0) on tqpair=0x119d750 01:24:25.297 [2024-12-09 05:19:07.584984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.584996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119d750) 01:24:25.297 [2024-12-09 05:19:07.585001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.297 [2024-12-09 05:19:07.585010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201bc0, cid 3, qid 0 01:24:25.297 [2024-12-09 05:19:07.585046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.297 [2024-12-09 05:19:07.585051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.297 [2024-12-09 05:19:07.585053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.585055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201bc0) on tqpair=0x119d750 01:24:25.297 [2024-12-09 05:19:07.585069] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.585071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.585074] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119d750) 01:24:25.297 [2024-12-09 05:19:07.585079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.297 [2024-12-09 05:19:07.585088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201bc0, cid 3, qid 0 01:24:25.297 [2024-12-09 05:19:07.585123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.297 [2024-12-09 05:19:07.585128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.297 [2024-12-09 05:19:07.585130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.585132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201bc0) on tqpair=0x119d750 01:24:25.297 [2024-12-09 05:19:07.585145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.585148] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.585151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119d750) 01:24:25.297 [2024-12-09 05:19:07.585155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.297 [2024-12-09 05:19:07.585170] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201bc0, cid 3, qid 0 01:24:25.297 [2024-12-09 05:19:07.585204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.297 [2024-12-09 05:19:07.585208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.297 [2024-12-09 05:19:07.585210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.585213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201bc0) on tqpair=0x119d750 01:24:25.297 [2024-12-09 05:19:07.585219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.585222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.297 [2024-12-09 05:19:07.585225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119d750) 01:24:25.297 [2024-12-09 05:19:07.585229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.297 [2024-12-09 05:19:07.585247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201bc0, cid 3, qid 0 01:24:25.297 [2024-12-09 05:19:07.585272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.298 [2024-12-09 05:19:07.585277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.298 [2024-12-09 05:19:07.585279] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.298 [2024-12-09 05:19:07.585281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201bc0) on tqpair=0x119d750 01:24:25.298 [2024-12-09 05:19:07.585300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.298 [2024-12-09 05:19:07.585303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.298 [2024-12-09 05:19:07.585305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119d750) 01:24:25.298 [2024-12-09 05:19:07.585316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.298 [2024-12-09 05:19:07.589332] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201bc0, cid 3, qid 0 01:24:25.298 [2024-12-09 05:19:07.589352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.298 [2024-12-09 05:19:07.589357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.298 [2024-12-09 05:19:07.589359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.298 [2024-12-09 05:19:07.589362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201bc0) on tqpair=0x119d750 01:24:25.298 [2024-12-09 05:19:07.589370] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:24:25.298 [2024-12-09 05:19:07.589373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:24:25.298 [2024-12-09 05:19:07.589375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119d750) 
01:24:25.298 [2024-12-09 05:19:07.589381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:24:25.298 [2024-12-09 05:19:07.589396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1201bc0, cid 3, qid 0 01:24:25.298 [2024-12-09 05:19:07.589436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:24:25.298 [2024-12-09 05:19:07.589441] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:24:25.298 [2024-12-09 05:19:07.589443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:24:25.298 [2024-12-09 05:19:07.589445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1201bc0) on tqpair=0x119d750 01:24:25.298 [2024-12-09 05:19:07.589451] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 01:24:25.298 0% 01:24:25.298 Data Units Read: 0 01:24:25.298 Data Units Written: 0 01:24:25.298 Host Read Commands: 0 01:24:25.298 Host Write Commands: 0 01:24:25.298 Controller Busy Time: 0 minutes 01:24:25.298 Power Cycles: 0 01:24:25.298 Power On Hours: 0 hours 01:24:25.298 Unsafe Shutdowns: 0 01:24:25.298 Unrecoverable Media Errors: 0 01:24:25.298 Lifetime Error Log Entries: 0 01:24:25.298 Warning Temperature Time: 0 minutes 01:24:25.298 Critical Temperature Time: 0 minutes 01:24:25.298 01:24:25.298 Number of Queues 01:24:25.298 ================ 01:24:25.298 Number of I/O Submission Queues: 127 01:24:25.298 Number of I/O Completion Queues: 127 01:24:25.298 01:24:25.298 Active Namespaces 01:24:25.298 ================= 01:24:25.298 Namespace ID:1 01:24:25.298 Error Recovery Timeout: Unlimited 01:24:25.298 Command Set Identifier: NVM (00h) 01:24:25.298 Deallocate: Supported 01:24:25.298 Deallocated/Unwritten Error: Not Supported 01:24:25.298 Deallocated Read Value: Unknown 01:24:25.298 Deallocate in Write Zeroes: Not Supported 01:24:25.298 Deallocated Guard Field: 0xFFFF 01:24:25.298 Flush: Supported 01:24:25.298 Reservation: Supported 01:24:25.298 Namespace Sharing Capabilities: Multiple Controllers 01:24:25.298 Size (in LBAs): 131072 (0GiB) 01:24:25.298 Capacity (in LBAs): 131072 (0GiB) 01:24:25.298 Utilization (in LBAs): 131072 (0GiB) 01:24:25.298 NGUID: ABCDEF0123456789ABCDEF0123456789 01:24:25.298 EUI64: ABCDEF0123456789 01:24:25.298 UUID: cb6a25d6-9a5c-4637-bb85-e5c0fb307978 01:24:25.298 Thin Provisioning: Not Supported 01:24:25.298 Per-NS Atomic Units: Yes 01:24:25.298 Atomic Boundary Size (Normal): 0 01:24:25.298 Atomic Boundary Size (PFail): 0 01:24:25.298 Atomic Boundary Offset: 0 01:24:25.298 Maximum Single Source Range Length: 65535 01:24:25.298 Maximum Copy Length: 65535 01:24:25.298 Maximum Source Range Count: 1 01:24:25.298 NGUID/EUI64 Never Reused: No 01:24:25.298 Namespace Write Protected: No 01:24:25.298 Number of LBA Formats: 1 01:24:25.298 Current LBA Format: LBA Format #00 01:24:25.298 LBA Format #00: Data Size: 512 Metadata Size: 0 01:24:25.298 01:24:25.298 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 01:24:25.298 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:24:25.298 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:24:25.298 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:24:25.557 rmmod nvme_tcp 01:24:25.557 rmmod nvme_fabrics 01:24:25.557 rmmod nvme_keyring 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 73996 ']' 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 73996 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 73996 ']' 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 73996 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73996 01:24:25.557 killing process with pid 73996 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73996' 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 73996 01:24:25.557 05:19:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 73996 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:24:25.816 05:19:08 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:24:25.816 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:24:26.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:26.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:26.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 01:24:26.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:26.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:26.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:26.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 01:24:26.075 01:24:26.075 real 0m3.085s 01:24:26.075 user 0m7.555s 01:24:26.075 sys 0m0.868s 01:24:26.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:26.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:24:26.075 ************************************ 01:24:26.075 END TEST nvmf_identify 01:24:26.075 ************************************ 01:24:26.075 05:19:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 01:24:26.075 05:19:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:24:26.075 05:19:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:26.075 05:19:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:24:26.075 ************************************ 01:24:26.075 START TEST nvmf_perf 01:24:26.075 ************************************ 01:24:26.075 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 01:24:26.333 * Looking for test storage... 
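Note on the NET_TYPE=virt topology: the nvmf_veth_fini trace above and the nvmf_veth_init trace below show the virtual network this harness builds for each test: a target namespace (nvmf_tgt_ns_spdk) holding two veth endpoints, two initiator-side veth endpoints on the host, and an nvmf_br bridge joining the peer ends. A rough standalone sketch of that setup, reusing the interface names, 10.0.0.0/24 addresses, and iptables rules traced in this log, is shown below; it is an illustration of the commands recorded here, not the harness's exact common.sh implementation (ordering and error handling are simplified).

  ip netns add nvmf_tgt_ns_spdk

  # initiator-side veth pairs (host end + bridge end)
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2

  # target-side veth pairs, with the target ends moved into the namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # addresses: initiators on 10.0.0.1/.2, target listeners on 10.0.0.3/.4
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # bring everything up and bridge the peer ends together
  ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
  ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
  ip link set nvmf_tgt_br  up;  ip link set nvmf_tgt_br2  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br

  # allow NVMe/TCP (port 4420) in on the initiator interfaces and traffic across the bridge
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # sanity check: host reaches the target-side addresses, namespace reaches the host side
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

The bridged-veth layout lets the nvmf_tgt process listen on 10.0.0.3:4420 inside its own namespace while the initiator connects over a real TCP path on the host, which is why the controller in the identify output above is reported at 10.0.0.3:4420.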
01:24:26.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 01:24:26.333 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:24:26.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:26.334 --rc genhtml_branch_coverage=1 01:24:26.334 --rc genhtml_function_coverage=1 01:24:26.334 --rc genhtml_legend=1 01:24:26.334 --rc geninfo_all_blocks=1 01:24:26.334 --rc geninfo_unexecuted_blocks=1 01:24:26.334 01:24:26.334 ' 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:24:26.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:26.334 --rc genhtml_branch_coverage=1 01:24:26.334 --rc genhtml_function_coverage=1 01:24:26.334 --rc genhtml_legend=1 01:24:26.334 --rc geninfo_all_blocks=1 01:24:26.334 --rc geninfo_unexecuted_blocks=1 01:24:26.334 01:24:26.334 ' 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:24:26.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:26.334 --rc genhtml_branch_coverage=1 01:24:26.334 --rc genhtml_function_coverage=1 01:24:26.334 --rc genhtml_legend=1 01:24:26.334 --rc geninfo_all_blocks=1 01:24:26.334 --rc geninfo_unexecuted_blocks=1 01:24:26.334 01:24:26.334 ' 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:24:26.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:26.334 --rc genhtml_branch_coverage=1 01:24:26.334 --rc genhtml_function_coverage=1 01:24:26.334 --rc genhtml_legend=1 01:24:26.334 --rc geninfo_all_blocks=1 01:24:26.334 --rc geninfo_unexecuted_blocks=1 01:24:26.334 01:24:26.334 ' 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:24:26.334 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:24:26.334 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:24:26.335 Cannot find device "nvmf_init_br" 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:24:26.335 Cannot find device "nvmf_init_br2" 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:24:26.335 Cannot find device "nvmf_tgt_br" 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:24:26.335 Cannot find device "nvmf_tgt_br2" 01:24:26.335 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:24:26.593 Cannot find device "nvmf_init_br" 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:24:26.593 Cannot find device "nvmf_init_br2" 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:24:26.593 Cannot find device "nvmf_tgt_br" 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:24:26.593 Cannot find device "nvmf_tgt_br2" 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:24:26.593 Cannot find device "nvmf_br" 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:24:26.593 Cannot find device "nvmf_init_if" 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:24:26.593 Cannot find device "nvmf_init_if2" 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:26.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:26.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:24:26.593 05:19:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:24:26.593 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:24:26.593 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:24:26.593 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:24:26.593 05:19:09 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:24:26.593 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:24:26.593 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:24:26.593 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:24:26.593 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:24:26.593 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:24:26.593 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:24:26.594 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:24:26.852 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:24:26.852 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:24:26.852 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:24:26.852 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:24:26.853 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:24:26.853 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 01:24:26.853 01:24:26.853 --- 10.0.0.3 ping statistics --- 01:24:26.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:26.853 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:24:26.853 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
01:24:26.853 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 01:24:26.853 01:24:26.853 --- 10.0.0.4 ping statistics --- 01:24:26.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:26.853 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:24:26.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:24:26.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 01:24:26.853 01:24:26.853 --- 10.0.0.1 ping statistics --- 01:24:26.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:26.853 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:24:26.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:24:26.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 01:24:26.853 01:24:26.853 --- 10.0.0.2 ping statistics --- 01:24:26.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:26.853 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74258 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74258 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74258 ']' 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:26.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
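The trace above is nvmf_veth_init from nvmf/common.sh wiring up the virtual test network before nvmf_tgt is launched inside the nvmf_tgt_ns_spdk namespace. A condensed, standalone sketch of the same steps, keeping only the first initiator/target pair (the harness repeats the pattern for nvmf_init_if2/nvmf_tgt_if2 and 10.0.0.2/10.0.0.4, and adds iptables comments for later cleanup), is roughly:

# condensed sketch of the nvmf_veth_init steps shown in the trace above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the host-side peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow forwarding across the bridge
ping -c 1 10.0.0.3                                              # initiator -> target sanity check
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # target -> initiator sanity check

The earlier "Cannot find device" and "Cannot open network namespace" messages are expected: the teardown half of the helper runs first and each failed deletion is tolerated (note the "-- # true" that follows each one).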
01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:26.853 05:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:24:26.853 [2024-12-09 05:19:09.221454] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:24:26.853 [2024-12-09 05:19:09.221514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:24:27.112 [2024-12-09 05:19:09.373875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:24:27.112 [2024-12-09 05:19:09.419415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:24:27.112 [2024-12-09 05:19:09.419465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:24:27.112 [2024-12-09 05:19:09.419471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:24:27.112 [2024-12-09 05:19:09.419476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:24:27.112 [2024-12-09 05:19:09.419480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:24:27.112 [2024-12-09 05:19:09.420406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:24:27.112 [2024-12-09 05:19:09.420883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:24:27.112 [2024-12-09 05:19:09.421090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:27.112 [2024-12-09 05:19:09.421090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:24:27.112 [2024-12-09 05:19:09.463569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:24:27.678 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:27.678 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 01:24:27.678 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:24:27.678 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 01:24:27.678 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:24:27.678 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:24:27.678 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 01:24:27.678 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:24:28.245 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 01:24:28.245 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 01:24:28.245 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 01:24:28.245 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:24:28.503 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 01:24:28.503 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 01:24:28.503 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 01:24:28.503 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 01:24:28.503 05:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:24:28.762 [2024-12-09 05:19:11.108785] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:24:28.762 05:19:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:24:29.026 05:19:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 01:24:29.026 05:19:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:24:29.292 05:19:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 01:24:29.292 05:19:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 01:24:29.562 05:19:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:24:29.562 [2024-12-09 05:19:11.948232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:24:29.562 05:19:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:24:29.820 05:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 01:24:29.820 05:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 01:24:29.820 05:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 01:24:29.820 05:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 01:24:31.195 Initializing NVMe Controllers 01:24:31.195 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:24:31.195 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 01:24:31.195 Initialization complete. Launching workers. 01:24:31.195 ======================================================== 01:24:31.195 Latency(us) 01:24:31.195 Device Information : IOPS MiB/s Average min max 01:24:31.195 PCIE (0000:00:10.0) NSID 1 from core 0: 19736.24 77.09 1622.30 416.80 8311.36 01:24:31.195 ======================================================== 01:24:31.195 Total : 19736.24 77.09 1622.30 416.80 8311.36 01:24:31.195 01:24:31.195 05:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:24:32.568 Initializing NVMe Controllers 01:24:32.568 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:24:32.568 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:24:32.568 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:24:32.568 Initialization complete. Launching workers. 
01:24:32.568 ======================================================== 01:24:32.569 Latency(us) 01:24:32.569 Device Information : IOPS MiB/s Average min max 01:24:32.569 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5130.99 20.04 193.89 72.22 6164.51 01:24:32.569 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8103.95 5183.03 12069.63 01:24:32.569 ======================================================== 01:24:32.569 Total : 5254.99 20.53 380.54 72.22 12069.63 01:24:32.569 01:24:32.569 05:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:24:33.945 Initializing NVMe Controllers 01:24:33.945 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:24:33.945 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:24:33.945 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:24:33.945 Initialization complete. Launching workers. 01:24:33.945 ======================================================== 01:24:33.945 Latency(us) 01:24:33.945 Device Information : IOPS MiB/s Average min max 01:24:33.945 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10555.99 41.23 3032.29 473.83 6658.01 01:24:33.945 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4000.00 15.62 8045.35 7047.73 12523.44 01:24:33.945 ======================================================== 01:24:33.945 Total : 14555.99 56.86 4409.89 473.83 12523.44 01:24:33.945 01:24:33.945 05:19:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 01:24:33.945 05:19:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:24:36.484 Initializing NVMe Controllers 01:24:36.484 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:24:36.484 Controller IO queue size 128, less than required. 01:24:36.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:24:36.484 Controller IO queue size 128, less than required. 01:24:36.484 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:24:36.484 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:24:36.484 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:24:36.484 Initialization complete. Launching workers. 
01:24:36.484 ======================================================== 01:24:36.484 Latency(us) 01:24:36.484 Device Information : IOPS MiB/s Average min max 01:24:36.484 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2323.79 580.95 55689.76 28867.96 104438.43 01:24:36.484 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 664.44 166.11 197385.79 50347.52 317221.38 01:24:36.484 ======================================================== 01:24:36.484 Total : 2988.23 747.06 87196.21 28867.96 317221.38 01:24:36.484 01:24:36.484 05:19:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 01:24:36.744 Initializing NVMe Controllers 01:24:36.744 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:24:36.744 Controller IO queue size 128, less than required. 01:24:36.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:24:36.744 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 01:24:36.744 Controller IO queue size 128, less than required. 01:24:36.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:24:36.744 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 01:24:36.744 WARNING: Some requested NVMe devices were skipped 01:24:36.744 No valid NVMe controllers or AIO or URING devices found 01:24:36.744 05:19:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 01:24:39.299 Initializing NVMe Controllers 01:24:39.299 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:24:39.299 Controller IO queue size 128, less than required. 01:24:39.299 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:24:39.299 Controller IO queue size 128, less than required. 01:24:39.299 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:24:39.299 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:24:39.299 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:24:39.299 Initialization complete. Launching workers. 
01:24:39.299 01:24:39.299 ==================== 01:24:39.299 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 01:24:39.299 TCP transport: 01:24:39.299 polls: 25781 01:24:39.299 idle_polls: 21499 01:24:39.299 sock_completions: 4282 01:24:39.299 nvme_completions: 6277 01:24:39.299 submitted_requests: 9414 01:24:39.299 queued_requests: 1 01:24:39.299 01:24:39.299 ==================== 01:24:39.299 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 01:24:39.299 TCP transport: 01:24:39.299 polls: 29973 01:24:39.299 idle_polls: 24326 01:24:39.299 sock_completions: 5647 01:24:39.299 nvme_completions: 6877 01:24:39.299 submitted_requests: 10396 01:24:39.299 queued_requests: 1 01:24:39.299 ======================================================== 01:24:39.299 Latency(us) 01:24:39.299 Device Information : IOPS MiB/s Average min max 01:24:39.299 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1568.99 392.25 83347.14 46784.94 135403.13 01:24:39.299 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1718.99 429.75 75137.66 31911.93 141822.90 01:24:39.299 ======================================================== 01:24:39.299 Total : 3287.98 821.99 79055.14 31911.93 141822.90 01:24:39.299 01:24:39.299 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:24:39.593 rmmod nvme_tcp 01:24:39.593 rmmod nvme_fabrics 01:24:39.593 rmmod nvme_keyring 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74258 ']' 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74258 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74258 ']' 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74258 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 01:24:39.593 05:19:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:39.593 05:19:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74258 01:24:39.593 05:19:22 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 01:24:39.593 05:19:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:24:39.593 05:19:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74258' 01:24:39.593 killing process with pid 74258 01:24:39.593 05:19:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74258 01:24:39.593 05:19:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74258 01:24:40.971 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:24:40.971 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:24:40.971 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:24:40.971 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 01:24:40.971 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 01:24:40.971 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 01:24:40.971 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:24:40.971 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:24:40.971 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:24:40.971 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:24:41.229 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:24:41.229 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:24:41.229 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:24:41.229 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:24:41.229 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:24:41.229 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:24:41.229 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:24:41.229 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:24:41.229 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:24:41.229 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:24:41.229 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:41.229 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:41.229 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 01:24:41.229 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:41.229 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:41.229 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:41.488 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 01:24:41.488 01:24:41.488 real 0m15.267s 01:24:41.488 user 0m55.072s 01:24:41.488 sys 0m3.590s 01:24:41.488 05:19:23 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:41.488 05:19:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:24:41.488 ************************************ 01:24:41.488 END TEST nvmf_perf 01:24:41.488 ************************************ 01:24:41.488 05:19:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 01:24:41.488 05:19:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:24:41.488 05:19:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:41.488 05:19:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:24:41.488 ************************************ 01:24:41.488 START TEST nvmf_fio_host 01:24:41.488 ************************************ 01:24:41.488 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 01:24:41.488 * Looking for test storage... 01:24:41.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:24:41.488 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:24:41.488 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 01:24:41.488 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:24:41.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:41.747 --rc genhtml_branch_coverage=1 01:24:41.747 --rc genhtml_function_coverage=1 01:24:41.747 --rc genhtml_legend=1 01:24:41.747 --rc geninfo_all_blocks=1 01:24:41.747 --rc geninfo_unexecuted_blocks=1 01:24:41.747 01:24:41.747 ' 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:24:41.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:41.747 --rc genhtml_branch_coverage=1 01:24:41.747 --rc genhtml_function_coverage=1 01:24:41.747 --rc genhtml_legend=1 01:24:41.747 --rc geninfo_all_blocks=1 01:24:41.747 --rc geninfo_unexecuted_blocks=1 01:24:41.747 01:24:41.747 ' 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:24:41.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:41.747 --rc genhtml_branch_coverage=1 01:24:41.747 --rc genhtml_function_coverage=1 01:24:41.747 --rc genhtml_legend=1 01:24:41.747 --rc geninfo_all_blocks=1 01:24:41.747 --rc geninfo_unexecuted_blocks=1 01:24:41.747 01:24:41.747 ' 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:24:41.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:41.747 --rc genhtml_branch_coverage=1 01:24:41.747 --rc genhtml_function_coverage=1 01:24:41.747 --rc genhtml_legend=1 01:24:41.747 --rc geninfo_all_blocks=1 01:24:41.747 --rc geninfo_unexecuted_blocks=1 01:24:41.747 01:24:41.747 ' 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:24:41.747 05:19:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 01:24:41.747 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:24:41.747 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:24:41.747 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:24:41.748 05:19:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:41.748 05:19:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:24:41.748 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
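As the sourcing of test/nvmf/common.sh above shows, the fio host test also prepares a per-run host identity: nvme gen-hostnqn produces NVME_HOSTNQN, the UUID portion becomes NVME_HOSTID, and both are packaged into NVME_HOST alongside NVME_CONNECT='nvme connect' for tests that attach with nvme-cli. A purely illustrative sketch of how those pieces combine (the fio test in this run drives I/O through the SPDK fio plugin instead, so this connect is not executed here):

# illustrative use of the host identity prepared by common.sh; not run by this particular test
NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # the UUID portion, matching the NVME_HOSTID value in the trace
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"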
01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:24:41.748 Cannot find device "nvmf_init_br" 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:24:41.748 Cannot find device "nvmf_init_br2" 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:24:41.748 Cannot find device "nvmf_tgt_br" 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 01:24:41.748 Cannot find device "nvmf_tgt_br2" 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:24:41.748 Cannot find device "nvmf_init_br" 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:24:41.748 Cannot find device "nvmf_init_br2" 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:24:41.748 Cannot find device "nvmf_tgt_br" 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:24:41.748 Cannot find device "nvmf_tgt_br2" 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 01:24:41.748 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:24:42.006 Cannot find device "nvmf_br" 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:24:42.006 Cannot find device "nvmf_init_if" 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:24:42.006 Cannot find device "nvmf_init_if2" 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:42.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:42.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:24:42.006 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:24:42.007 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:24:42.007 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 01:24:42.007 01:24:42.007 --- 10.0.0.3 ping statistics --- 01:24:42.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:42.007 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:24:42.007 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:24:42.007 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 01:24:42.007 01:24:42.007 --- 10.0.0.4 ping statistics --- 01:24:42.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:42.007 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:24:42.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:24:42.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 01:24:42.007 01:24:42.007 --- 10.0.0.1 ping statistics --- 01:24:42.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:42.007 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:24:42.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:24:42.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 01:24:42.007 01:24:42.007 --- 10.0.0.2 ping statistics --- 01:24:42.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:42.007 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74734 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74734 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 74734 ']' 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:42.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:42.007 05:19:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:24:42.265 [2024-12-09 05:19:24.478528] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:24:42.265 [2024-12-09 05:19:24.478653] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:24:42.265 [2024-12-09 05:19:24.631029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:24:42.265 [2024-12-09 05:19:24.682867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:24:42.265 [2024-12-09 05:19:24.683000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:24:42.265 [2024-12-09 05:19:24.683009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:24:42.265 [2024-12-09 05:19:24.683014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:24:42.265 [2024-12-09 05:19:24.683018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
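The launch traced above reduces to starting nvmf_tgt inside the test namespace and then polling its RPC socket until the application answers; a minimal sketch of that pattern, assuming rpc_get_methods as the readiness probe (the waitforlisten helper's actual probe is not shown in this trace):

  # Start the NVMe-oF target in the namespace with the core/trace masks used above.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the UNIX domain RPC socket; rpc.py fails until the target is listening.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done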
01:24:42.265 [2024-12-09 05:19:24.683917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:24:42.265 [2024-12-09 05:19:24.684103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:24:42.265 [2024-12-09 05:19:24.684218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:24:42.265 [2024-12-09 05:19:24.684213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:24:42.523 [2024-12-09 05:19:24.725303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:24:43.105 05:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:43.105 05:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 01:24:43.105 05:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:24:43.105 [2024-12-09 05:19:25.551581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:24:43.362 05:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 01:24:43.363 05:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 01:24:43.363 05:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:24:43.363 05:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 01:24:43.621 Malloc1 01:24:43.621 05:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:24:43.621 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:24:43.878 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:24:44.137 [2024-12-09 05:19:26.438980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:24:44.137 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:24:44.395 05:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 01:24:44.653 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:24:44.653 fio-3.35 01:24:44.653 Starting 1 thread 01:24:47.184 01:24:47.184 test: (groupid=0, jobs=1): err= 0: pid=74806: Mon Dec 9 05:19:29 2024 01:24:47.184 read: IOPS=11.3k, BW=44.1MiB/s (46.3MB/s)(88.4MiB/2005msec) 01:24:47.184 slat (nsec): min=1504, max=445570, avg=1739.00, stdev=3732.19 01:24:47.184 clat (usec): min=2939, max=10968, avg=5945.17, stdev=515.53 01:24:47.184 lat (usec): min=2940, max=10970, avg=5946.91, stdev=515.58 01:24:47.184 clat percentiles (usec): 01:24:47.184 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5604], 01:24:47.184 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 5997], 01:24:47.184 | 70.00th=[ 6128], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6783], 01:24:47.184 | 99.00th=[ 7373], 99.50th=[ 7963], 99.90th=[ 9634], 99.95th=[ 9634], 01:24:47.184 | 99.99th=[10290] 01:24:47.184 bw ( KiB/s): min=44360, max=46123, per=99.88%, avg=45116.75, stdev=889.80, samples=4 01:24:47.184 iops : min=11090, max=11530, avg=11279.00, stdev=222.17, samples=4 01:24:47.184 write: IOPS=11.2k, BW=43.9MiB/s (46.0MB/s)(88.0MiB/2005msec); 0 zone resets 01:24:47.184 slat (nsec): min=1550, max=235500, avg=1789.23, stdev=1867.34 01:24:47.184 clat (usec): min=2148, max=10280, avg=5370.35, stdev=500.01 01:24:47.184 lat (usec): min=2150, max=10282, avg=5372.13, stdev=500.12 01:24:47.184 
clat percentiles (usec): 01:24:47.184 | 1.00th=[ 3982], 5.00th=[ 4686], 10.00th=[ 4883], 20.00th=[ 5014], 01:24:47.184 | 30.00th=[ 5145], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5473], 01:24:47.184 | 70.00th=[ 5538], 80.00th=[ 5669], 90.00th=[ 5866], 95.00th=[ 6063], 01:24:47.184 | 99.00th=[ 6652], 99.50th=[ 7373], 99.90th=[ 8717], 99.95th=[ 9765], 01:24:47.184 | 99.99th=[10290] 01:24:47.184 bw ( KiB/s): min=44656, max=45413, per=99.95%, avg=44901.25, stdev=347.45, samples=4 01:24:47.184 iops : min=11164, max=11353, avg=11225.25, stdev=86.74, samples=4 01:24:47.184 lat (msec) : 4=0.67%, 10=99.29%, 20=0.03% 01:24:47.184 cpu : usr=76.25%, sys=18.96%, ctx=5, majf=0, minf=7 01:24:47.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 01:24:47.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:24:47.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:24:47.184 issued rwts: total=22641,22518,0,0 short=0,0,0,0 dropped=0,0,0,0 01:24:47.184 latency : target=0, window=0, percentile=100.00%, depth=128 01:24:47.184 01:24:47.184 Run status group 0 (all jobs): 01:24:47.184 READ: bw=44.1MiB/s (46.3MB/s), 44.1MiB/s-44.1MiB/s (46.3MB/s-46.3MB/s), io=88.4MiB (92.7MB), run=2005-2005msec 01:24:47.184 WRITE: bw=43.9MiB/s (46.0MB/s), 43.9MiB/s-43.9MiB/s (46.0MB/s-46.0MB/s), io=88.0MiB (92.2MB), run=2005-2005msec 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:24:47.184 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:24:47.185 05:19:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 01:24:47.185 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 01:24:47.185 fio-3.35 01:24:47.185 Starting 1 thread 01:24:49.713 01:24:49.713 test: (groupid=0, jobs=1): err= 0: pid=74855: Mon Dec 9 05:19:32 2024 01:24:49.713 read: IOPS=9486, BW=148MiB/s (155MB/s)(298MiB/2008msec) 01:24:49.713 slat (nsec): min=2405, max=99800, avg=2772.08, stdev=1651.96 01:24:49.713 clat (usec): min=1485, max=15475, avg=7911.80, stdev=1975.26 01:24:49.713 lat (usec): min=1488, max=15477, avg=7914.57, stdev=1975.39 01:24:49.713 clat percentiles (usec): 01:24:49.713 | 1.00th=[ 3490], 5.00th=[ 4555], 10.00th=[ 5342], 20.00th=[ 6325], 01:24:49.713 | 30.00th=[ 6980], 40.00th=[ 7439], 50.00th=[ 7832], 60.00th=[ 8291], 01:24:49.713 | 70.00th=[ 8848], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[11207], 01:24:49.713 | 99.00th=[12649], 99.50th=[13173], 99.90th=[14222], 99.95th=[15139], 01:24:49.713 | 99.99th=[15401] 01:24:49.713 bw ( KiB/s): min=71072, max=83904, per=49.69%, avg=75416.00, stdev=5762.04, samples=4 01:24:49.713 iops : min= 4442, max= 5244, avg=4713.50, stdev=360.13, samples=4 01:24:49.713 write: IOPS=5500, BW=85.9MiB/s (90.1MB/s)(154MiB/1794msec); 0 zone resets 01:24:49.713 slat (usec): min=27, max=512, avg=30.65, stdev=10.66 01:24:49.713 clat (usec): min=1572, max=19226, avg=9924.95, stdev=2066.91 01:24:49.713 lat (usec): min=1601, max=19365, avg=9955.60, stdev=2069.77 01:24:49.713 clat percentiles (usec): 01:24:49.713 | 1.00th=[ 5932], 5.00th=[ 6849], 10.00th=[ 7439], 20.00th=[ 8094], 01:24:49.713 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10290], 01:24:49.713 | 70.00th=[10945], 80.00th=[11731], 90.00th=[12780], 95.00th=[13566], 01:24:49.713 | 99.00th=[14746], 99.50th=[15926], 99.90th=[18220], 99.95th=[18744], 01:24:49.713 | 99.99th=[19268] 01:24:49.713 bw ( KiB/s): min=72704, max=86880, per=89.23%, avg=78520.00, stdev=5971.03, samples=4 01:24:49.713 iops : min= 4544, max= 5430, avg=4907.50, stdev=373.19, samples=4 01:24:49.713 lat (msec) : 2=0.13%, 4=1.61%, 10=73.06%, 20=25.20% 01:24:49.713 cpu : usr=85.10%, sys=12.26%, ctx=3, majf=0, minf=8 01:24:49.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 01:24:49.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:24:49.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:24:49.713 issued rwts: total=19049,9867,0,0 short=0,0,0,0 dropped=0,0,0,0 01:24:49.713 latency : target=0, window=0, percentile=100.00%, depth=128 01:24:49.713 01:24:49.713 Run status group 0 (all jobs): 
01:24:49.713 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=298MiB (312MB), run=2008-2008msec 01:24:49.713 WRITE: bw=85.9MiB/s (90.1MB/s), 85.9MiB/s-85.9MiB/s (90.1MB/s-90.1MB/s), io=154MiB (162MB), run=1794-1794msec 01:24:49.981 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:24:49.981 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 01:24:49.981 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:24:49.981 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 01:24:49.981 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 01:24:49.981 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 01:24:49.981 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:24:50.259 rmmod nvme_tcp 01:24:50.259 rmmod nvme_fabrics 01:24:50.259 rmmod nvme_keyring 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74734 ']' 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74734 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74734 ']' 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74734 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74734 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74734' 01:24:50.259 killing process with pid 74734 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74734 01:24:50.259 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74734 01:24:50.518 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:24:50.518 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:24:50.518 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:24:50.518 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 01:24:50.518 05:19:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:24:50.518 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 01:24:50.518 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 01:24:50.518 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:24:50.518 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:24:50.518 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:24:50.518 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:24:50.518 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:24:50.518 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:24:50.776 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:24:50.776 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:24:50.776 05:19:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:24:50.776 05:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:24:50.776 05:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:24:50.776 05:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:24:50.776 05:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:24:50.776 05:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:50.776 05:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:50.776 05:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 01:24:50.776 05:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:50.776 05:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:50.776 05:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:50.776 05:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 01:24:50.776 01:24:50.776 real 0m9.397s 01:24:50.776 user 0m37.330s 01:24:50.776 sys 0m2.217s 01:24:50.776 05:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 01:24:50.776 05:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:24:50.776 ************************************ 01:24:50.776 END TEST nvmf_fio_host 01:24:50.776 ************************************ 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:24:51.035 ************************************ 01:24:51.035 START TEST nvmf_failover 
01:24:51.035 ************************************ 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 01:24:51.035 * Looking for test storage... 01:24:51.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:24:51.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:51.035 --rc genhtml_branch_coverage=1 01:24:51.035 --rc genhtml_function_coverage=1 01:24:51.035 --rc genhtml_legend=1 01:24:51.035 --rc geninfo_all_blocks=1 01:24:51.035 --rc geninfo_unexecuted_blocks=1 01:24:51.035 01:24:51.035 ' 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:24:51.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:51.035 --rc genhtml_branch_coverage=1 01:24:51.035 --rc genhtml_function_coverage=1 01:24:51.035 --rc genhtml_legend=1 01:24:51.035 --rc geninfo_all_blocks=1 01:24:51.035 --rc geninfo_unexecuted_blocks=1 01:24:51.035 01:24:51.035 ' 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:24:51.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:51.035 --rc genhtml_branch_coverage=1 01:24:51.035 --rc genhtml_function_coverage=1 01:24:51.035 --rc genhtml_legend=1 01:24:51.035 --rc geninfo_all_blocks=1 01:24:51.035 --rc geninfo_unexecuted_blocks=1 01:24:51.035 01:24:51.035 ' 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:24:51.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:24:51.035 --rc genhtml_branch_coverage=1 01:24:51.035 --rc genhtml_function_coverage=1 01:24:51.035 --rc genhtml_legend=1 01:24:51.035 --rc geninfo_all_blocks=1 01:24:51.035 --rc geninfo_unexecuted_blocks=1 01:24:51.035 01:24:51.035 ' 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:24:51.035 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:51.295 
05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:24:51.295 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
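The prepare_net_devs/nvmf_veth_init calls that follow rebuild the test topology from scratch: veth pairs for the initiator side (nvmf_init_if*, 10.0.0.1/.2) and for the target inside nvmf_tgt_ns_spdk (nvmf_tgt_if*, 10.0.0.3/.4), with the bridge-side peers enslaved to nvmf_br and iptables opened for NVMe/TCP. A condensed sketch of one initiator/target pair, using the names and addresses from the trace below:

  # One of the two initiator/target veth pairs created by nvmf_veth_init.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # Bridge the peer ends together so host and namespace share one L2 segment.
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # Allow NVMe/TCP (port 4420) in on the initiator interface.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT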
01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:24:51.295 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:24:51.295 Cannot find device "nvmf_init_br" 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:24:51.296 Cannot find device "nvmf_init_br2" 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
01:24:51.296 Cannot find device "nvmf_tgt_br" 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:24:51.296 Cannot find device "nvmf_tgt_br2" 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:24:51.296 Cannot find device "nvmf_init_br" 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:24:51.296 Cannot find device "nvmf_init_br2" 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:24:51.296 Cannot find device "nvmf_tgt_br" 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:24:51.296 Cannot find device "nvmf_tgt_br2" 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:24:51.296 Cannot find device "nvmf_br" 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:24:51.296 Cannot find device "nvmf_init_if" 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:24:51.296 Cannot find device "nvmf_init_if2" 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:24:51.296 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:24:51.296 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:24:51.296 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:24:51.555 
05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:24:51.555 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:24:51.556 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:24:51.556 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 01:24:51.556 01:24:51.556 --- 10.0.0.3 ping statistics --- 01:24:51.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:51.556 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:24:51.556 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:24:51.556 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.098 ms 01:24:51.556 01:24:51.556 --- 10.0.0.4 ping statistics --- 01:24:51.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:51.556 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:24:51.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:24:51.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 01:24:51.556 01:24:51.556 --- 10.0.0.1 ping statistics --- 01:24:51.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:51.556 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:24:51.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:24:51.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 01:24:51.556 01:24:51.556 --- 10.0.0.2 ping statistics --- 01:24:51.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:24:51.556 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75127 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75127 01:24:51.556 05:19:33 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75127 ']' 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:51.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:51.556 05:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:24:51.556 [2024-12-09 05:19:33.995026] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:24:51.556 [2024-12-09 05:19:33.995083] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:24:51.815 [2024-12-09 05:19:34.147623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:24:51.815 [2024-12-09 05:19:34.216537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:24:51.815 [2024-12-09 05:19:34.216587] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:24:51.815 [2024-12-09 05:19:34.216593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:24:51.815 [2024-12-09 05:19:34.216598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:24:51.815 [2024-12-09 05:19:34.216602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
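With the target up, the failover test provisions a single malloc-backed subsystem and exposes it on three NVMe/TCP ports so the host can move I/O between listeners; condensed, the rpc.py sequence traced below is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport with the same options the trace passes (-o -u 8192).
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Three listeners on the target address; the failover steps walk through these ports.
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done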
01:24:51.815 [2024-12-09 05:19:34.217930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:24:51.815 [2024-12-09 05:19:34.218064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:24:51.815 [2024-12-09 05:19:34.218062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:24:52.073 [2024-12-09 05:19:34.293502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:24:52.646 05:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:52.647 05:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 01:24:52.647 05:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:24:52.647 05:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 01:24:52.647 05:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:24:52.647 05:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:24:52.647 05:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:24:52.647 [2024-12-09 05:19:35.079847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:24:52.647 05:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:24:52.906 Malloc0 01:24:52.906 05:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:24:53.165 05:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:24:53.424 05:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:24:53.424 [2024-12-09 05:19:35.869180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:24:53.682 05:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:24:53.682 [2024-12-09 05:19:36.076978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:24:53.682 05:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 01:24:53.941 [2024-12-09 05:19:36.264758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 01:24:53.941 05:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 01:24:53.941 05:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75179 01:24:53.941 05:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
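bdevperf now sits idle on its own RPC socket (-z), and the host attaches the same subsystem through two different target ports so bdev_nvme has a spare path when the active listener is removed. A condensed sketch of the attach and test-start calls traced below:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # First path (port 4420), registered as NVMe0; -x failover enables path
  # failover between the attached controllers, per the options in the trace.
  $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # Second path to the same subsystem via port 4421.
  $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # Start the workload bdevperf was launched with (-q 128 -o 4096 -w verify -t 15).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests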
01:24:53.941 05:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75179 /var/tmp/bdevperf.sock 01:24:53.941 05:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75179 ']' 01:24:53.941 05:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:24:53.941 05:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 01:24:53.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:24:53.941 05:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:24:53.941 05:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 01:24:53.941 05:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:24:54.200 05:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:24:54.200 05:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 01:24:54.200 05:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:24:54.459 NVMe0n1 01:24:54.459 05:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:24:54.718 01:24:54.718 05:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:24:54.718 05:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75195 01:24:54.718 05:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 01:24:56.095 05:19:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:24:56.095 [2024-12-09 05:19:38.320131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236d30 is same with the state(6) to be set 01:24:56.095 [2024-12-09 05:19:38.320186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236d30 is same with the state(6) to be set 01:24:56.095 [2024-12-09 05:19:38.320193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236d30 is same with the state(6) to be set 01:24:56.095 [2024-12-09 05:19:38.320198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236d30 is same with the state(6) to be set 01:24:56.095 [2024-12-09 05:19:38.320203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236d30 is same with the state(6) to be set 01:24:56.095 [2024-12-09 05:19:38.320207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236d30 is same with the state(6) to be set 01:24:56.095 [2024-12-09 05:19:38.320212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236d30 is same with the state(6) to be set 01:24:56.095 [2024-12-09 05:19:38.320216] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236d30 is same with the state(6) to be set
01:24:56.095-01:24:56.096 (this tcp.c:1773 *ERROR* line for tqpair=0x1236d30 repeats unchanged, with only the microsecond timestamp advancing, from [2024-12-09 05:19:38.320221] through [2024-12-09 05:19:38.320729])
01:24:56.096 [2024-12-09 05:19:38.320734]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236d30 is same with the state(6) to be set 01:24:56.096 [2024-12-09 05:19:38.320739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236d30 is same with the state(6) to be set 01:24:56.096 [2024-12-09 05:19:38.320743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236d30 is same with the state(6) to be set 01:24:56.096 [2024-12-09 05:19:38.320748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236d30 is same with the state(6) to be set 01:24:56.096 05:19:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 01:24:59.383 05:19:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:24:59.383 01:24:59.383 05:19:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:24:59.383 05:19:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 01:25:02.680 05:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:25:02.680 [2024-12-09 05:19:45.028791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:25:02.680 05:19:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 01:25:03.617 05:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 01:25:03.876 [2024-12-09 05:19:46.238648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235660 is same with the state(6) to be set 01:25:03.876 [2024-12-09 05:19:46.238707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235660 is same with the state(6) to be set 01:25:03.876 05:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75195 01:25:10.458 { 01:25:10.458 "results": [ 01:25:10.458 { 01:25:10.458 "job": "NVMe0n1", 01:25:10.458 "core_mask": "0x1", 01:25:10.458 "workload": "verify", 01:25:10.458 "status": "finished", 01:25:10.458 "verify_range": { 01:25:10.458 "start": 0, 01:25:10.458 "length": 16384 01:25:10.458 }, 01:25:10.458 "queue_depth": 128, 01:25:10.458 "io_size": 4096, 01:25:10.458 "runtime": 15.009127, 01:25:10.458 "iops": 8625.351760965177, 01:25:10.458 "mibps": 33.69278031627022, 01:25:10.458 "io_failed": 4333, 01:25:10.458 "io_timeout": 0, 01:25:10.458 "avg_latency_us": 14334.13704959742, 01:25:10.458 "min_latency_us": 457.8934497816594, 01:25:10.458 "max_latency_us": 16598.63755458515 01:25:10.458 } 01:25:10.458 ], 01:25:10.458 "core_count": 1 01:25:10.458 } 01:25:10.458 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75179 01:25:10.458 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75179 ']' 01:25:10.458 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75179 01:25:10.458 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 01:25:10.458 05:19:52 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:10.458 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75179 01:25:10.458 killing process with pid 75179 01:25:10.458 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:10.458 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:10.458 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75179' 01:25:10.458 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75179 01:25:10.458 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75179 01:25:10.458 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:25:10.458 [2024-12-09 05:19:36.309630] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:25:10.458 [2024-12-09 05:19:36.309702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75179 ] 01:25:10.459 [2024-12-09 05:19:36.442260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:10.459 [2024-12-09 05:19:36.488119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:10.459 [2024-12-09 05:19:36.528423] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:25:10.459 Running I/O for 15 seconds... 
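Before the per-I/O abort log that follows, the host-side sequence that produced these results can be condensed from the rpc.py calls traced above. This is only a sketch of what host/failover.sh did (the paths, addresses and flags are the ones in the log), not the script itself.

# Host side: attach the same subsystem over two portals; -x failover lets the NVMe bdev switch paths
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$BPERF -s /var/tmp/bdevperf.sock perform_tests &   # the 15 s verify workload runs in the background
sleep 1
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # drop the active path, forcing failover
sleep 3
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
sleep 3
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420      # bring the first path back
sleep 1
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
wait   # perform_tests exits and prints the JSON summary shown above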
01:25:10.459 8950.00 IOPS, 34.96 MiB/s [2024-12-09T05:19:52.915Z] [2024-12-09 05:19:38.320797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.320848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.320869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.320878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.320889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.320897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.320907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.320916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.320925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.320934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.320943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.320955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.320965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.320972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.320982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.320990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 
05:19:38.321244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.459 [2024-12-09 05:19:38.321442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.459 [2024-12-09 05:19:38.321450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:25:10.459-01:25:10.461 (the same READ / ABORTED - SQ DELETION (00/08) pair is logged for every outstanding I/O from lba:79192 through lba:79736, each len:8, with only cid and timestamp varying)
01:25:10.461 [2024-12-09 05:19:38.322794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.461 [2024-12-09 05:19:38.322802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.461 [2024-12-09 05:19:38.322814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.461 [2024-12-09 05:19:38.322823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.461 [2024-12-09 05:19:38.322833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.461 [2024-12-09 05:19:38.322842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.461 [2024-12-09 05:19:38.322852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.461 [2024-12-09 05:19:38.322861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.461 [2024-12-09 05:19:38.322871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.461 [2024-12-09 05:19:38.322879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.461 [2024-12-09 05:19:38.322890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.461 [2024-12-09 05:19:38.322898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.461 [2024-12-09 05:19:38.322908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.461 [2024-12-09 05:19:38.322917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.322928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.462 [2024-12-09 05:19:38.322936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.322946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.462 [2024-12-09 05:19:38.322955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.322965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.462 [2024-12-09 05:19:38.322974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.322984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.462 [2024-12-09 05:19:38.322992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 
[2024-12-09 05:19:38.323003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.462 [2024-12-09 05:19:38.323017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.462 [2024-12-09 05:19:38.323036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.462 [2024-12-09 05:19:38.323054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.462 [2024-12-09 05:19:38.323073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.462 [2024-12-09 05:19:38.323102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.462 [2024-12-09 05:19:38.323121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.462 [2024-12-09 05:19:38.323141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.462 [2024-12-09 05:19:38.323160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.462 [2024-12-09 05:19:38.323178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.462 [2024-12-09 05:19:38.323198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.462 [2024-12-09 05:19:38.323217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.462 [2024-12-09 05:19:38.323236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.462 [2024-12-09 05:19:38.323255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.462 [2024-12-09 05:19:38.323278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.462 [2024-12-09 05:19:38.323297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.462 [2024-12-09 05:19:38.323316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63e00 is same with the state(6) to be set 01:25:10.462 [2024-12-09 05:19:38.323347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.462 [2024-12-09 05:19:38.323354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.462 [2024-12-09 05:19:38.323361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79960 len:8 PRP1 0x0 PRP2 0x0 01:25:10.462 [2024-12-09 05:19:38.323370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323426] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 01:25:10.462 [2024-12-09 05:19:38.323472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:25:10.462 [2024-12-09 05:19:38.323484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:25:10.462 [2024-12-09 05:19:38.323503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:25:10.462 [2024-12-09 05:19:38.323521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:25:10.462 [2024-12-09 05:19:38.323542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:38.323551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 01:25:10.462 [2024-12-09 05:19:38.323580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf4c60 (9): Bad file descriptor 01:25:10.462 [2024-12-09 05:19:38.326407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:25:10.462 [2024-12-09 05:19:38.350614] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 01:25:10.462 8581.50 IOPS, 33.52 MiB/s [2024-12-09T05:19:52.918Z] 8513.00 IOPS, 33.25 MiB/s [2024-12-09T05:19:52.918Z] 8516.75 IOPS, 33.27 MiB/s [2024-12-09T05:19:52.918Z] [2024-12-09 05:19:41.803110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.462 [2024-12-09 05:19:41.803182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.462 [2024-12-09 05:19:41.803230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.462 [2024-12-09 05:19:41.803240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803318] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.463 [2024-12-09 05:19:41.803388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.463 [2024-12-09 05:19:41.803408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.463 [2024-12-09 05:19:41.803427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.463 [2024-12-09 05:19:41.803446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.463 [2024-12-09 05:19:41.803464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.463 [2024-12-09 05:19:41.803489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.463 [2024-12-09 05:19:41.803508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.463 [2024-12-09 05:19:41.803527] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.463 [2024-12-09 05:19:41.803801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.463 [2024-12-09 05:19:41.803811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.803820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.803831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.803840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.803851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.464 [2024-12-09 05:19:41.803859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.803870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.464 [2024-12-09 05:19:41.803878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.803889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.464 [2024-12-09 05:19:41.803898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.803908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.464 [2024-12-09 05:19:41.803917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 
[2024-12-09 05:19:41.803927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.464 [2024-12-09 05:19:41.803936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.803947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.464 [2024-12-09 05:19:41.803956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.803966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.464 [2024-12-09 05:19:41.803979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.803990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.464 [2024-12-09 05:19:41.803999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.804019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.804039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.804058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.804077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.804096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.804115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.804134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.804152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.464 [2024-12-09 05:19:41.804171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.464 [2024-12-09 05:19:41.804190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.464 [2024-12-09 05:19:41.804209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.464 [2024-12-09 05:19:41.804232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.464 [2024-12-09 05:19:41.804251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.464 [2024-12-09 05:19:41.804271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.464 [2024-12-09 05:19:41.804291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.464 [2024-12-09 05:19:41.804310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804320] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.804338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.804358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.804377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.804396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.804415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.804434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.804453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.464 [2024-12-09 05:19:41.804464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.464 [2024-12-09 05:19:41.804472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.804497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.804517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64280 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.804535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.804555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.804574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.804593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.804611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.804630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.465 [2024-12-09 05:19:41.804651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.465 [2024-12-09 05:19:41.804670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.465 [2024-12-09 05:19:41.804689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.465 [2024-12-09 05:19:41.804708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.465 
[2024-12-09 05:19:41.804732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.465 [2024-12-09 05:19:41.804751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.465 [2024-12-09 05:19:41.804769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.465 [2024-12-09 05:19:41.804788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.804807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.804826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.804845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.804863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.804882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.804901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.804920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.804938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.465 [2024-12-09 05:19:41.804957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.465 [2024-12-09 05:19:41.804981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.804991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.465 [2024-12-09 05:19:41.805000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.805010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.465 [2024-12-09 05:19:41.805018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.805029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.465 [2024-12-09 05:19:41.805037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.805048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.465 [2024-12-09 05:19:41.805056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.805066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.465 [2024-12-09 05:19:41.805075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.805085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.465 [2024-12-09 05:19:41.805094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.805104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.805113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.805123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.805131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.805141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.805150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.805160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.805168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.465 [2024-12-09 05:19:41.805178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.465 [2024-12-09 05:19:41.805187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.466 [2024-12-09 05:19:41.805424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.466 [2024-12-09 05:19:41.805443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.466 [2024-12-09 05:19:41.805466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.466 [2024-12-09 05:19:41.805486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.466 [2024-12-09 05:19:41.805504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 
05:19:41.805514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.466 [2024-12-09 05:19:41.805523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.466 [2024-12-09 05:19:41.805542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.466 [2024-12-09 05:19:41.805561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.466 [2024-12-09 05:19:41.805701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805718] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb68370 is same with the state(6) to be set 01:25:10.466 [2024-12-09 05:19:41.805729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.466 [2024-12-09 05:19:41.805736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.466 [2024-12-09 05:19:41.805743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64576 len:8 PRP1 0x0 PRP2 0x0 01:25:10.466 [2024-12-09 05:19:41.805752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805800] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 01:25:10.466 [2024-12-09 05:19:41.805845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:25:10.466 [2024-12-09 05:19:41.805856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:25:10.466 [2024-12-09 05:19:41.805874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:25:10.466 [2024-12-09 05:19:41.805892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:25:10.466 [2024-12-09 05:19:41.805910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.466 [2024-12-09 05:19:41.805919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:25:10.466 [2024-12-09 05:19:41.808720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:25:10.466 [2024-12-09 05:19:41.808754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf4c60 (9): Bad file descriptor 01:25:10.466 [2024-12-09 05:19:41.834266] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
01:25:10.466 8492.20 IOPS, 33.17 MiB/s [2024-12-09T05:19:52.922Z] 8514.17 IOPS, 33.26 MiB/s [2024-12-09T05:19:52.923Z] 8499.00 IOPS, 33.20 MiB/s [2024-12-09T05:19:52.923Z] 8520.25 IOPS, 33.28 MiB/s [2024-12-09T05:19:52.923Z] 8672.56 IOPS, 33.88 MiB/s [2024-12-09T05:19:52.923Z] [2024-12-09 05:19:46.238956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.467 [2024-12-09 05:19:46.239094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.467 [2024-12-09 05:19:46.239159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.467 [2024-12-09 05:19:46.239179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.467 [2024-12-09 05:19:46.239199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.467 [2024-12-09 05:19:46.239218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.467 [2024-12-09 05:19:46.239238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 
lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.467 [2024-12-09 05:19:46.239259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.467 [2024-12-09 05:19:46.239277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.467 [2024-12-09 05:19:46.239604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.467 [2024-12-09 05:19:46.239624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.467 [2024-12-09 05:19:46.239643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.467 
[2024-12-09 05:19:46.239667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.467 [2024-12-09 05:19:46.239686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.467 [2024-12-09 05:19:46.239705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.467 [2024-12-09 05:19:46.239724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.467 [2024-12-09 05:19:46.239734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.239743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.239754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.239762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.239772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.239781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.239792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:127072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.239801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.239810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.239819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.239830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.239839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.239849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.239857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.239867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.239876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.239886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.239901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.239911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.239920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.239931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.468 [2024-12-09 05:19:46.239940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.239950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.468 [2024-12-09 05:19:46.239959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.239969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.468 [2024-12-09 05:19:46.239977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.239988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.468 [2024-12-09 05:19:46.239997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.468 [2024-12-09 05:19:46.240017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.468 [2024-12-09 05:19:46.240036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.468 [2024-12-09 05:19:46.240068] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.468 [2024-12-09 05:19:46.240086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.240103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:127136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.240121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.240139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.240161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.240179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:127168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.240197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:127176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.240215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.240234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.240252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.240270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.240288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.240308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.240326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.240352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.240370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.468 [2024-12-09 05:19:46.240388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.468 [2024-12-09 05:19:46.240402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 01:25:10.469 [2024-12-09 05:19:46.240457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.240483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.240502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.240520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.240538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.240556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.240574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.240592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.240611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 
05:19:46.240643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:127304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:25:10.469 [2024-12-09 05:19:46.240946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.240964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.240982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.240992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.241000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.241010] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.241019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.241028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.241037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.241046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.241055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.241065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.241073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.241083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.241095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.469 [2024-12-09 05:19:46.241104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.469 [2024-12-09 05:19:46.241112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.470 [2024-12-09 05:19:46.241131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.470 [2024-12-09 05:19:46.241149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.470 [2024-12-09 05:19:46.241167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.470 [2024-12-09 05:19:46.241185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 
nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.470 [2024-12-09 05:19:46.241204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:25:10.470 [2024-12-09 05:19:46.241222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb648b0 is same with the state(6) to be set 01:25:10.470 [2024-12-09 05:19:46.241243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.470 [2024-12-09 05:19:46.241249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.470 [2024-12-09 05:19:46.241255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126928 len:8 PRP1 0x0 PRP2 0x0 01:25:10.470 [2024-12-09 05:19:46.241264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.470 [2024-12-09 05:19:46.241279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.470 [2024-12-09 05:19:46.241286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127432 len:8 PRP1 0x0 PRP2 0x0 01:25:10.470 [2024-12-09 05:19:46.241294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.470 [2024-12-09 05:19:46.241308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.470 [2024-12-09 05:19:46.241315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127440 len:8 PRP1 0x0 PRP2 0x0 01:25:10.470 [2024-12-09 05:19:46.241354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.470 [2024-12-09 05:19:46.241371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.470 [2024-12-09 05:19:46.241377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127448 len:8 PRP1 0x0 PRP2 0x0 01:25:10.470 [2024-12-09 05:19:46.241385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.470 [2024-12-09 05:19:46.241399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.470 [2024-12-09 05:19:46.241405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127456 len:8 PRP1 0x0 PRP2 0x0 01:25:10.470 [2024-12-09 05:19:46.241414] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.470 [2024-12-09 05:19:46.241428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.470 [2024-12-09 05:19:46.241434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127464 len:8 PRP1 0x0 PRP2 0x0 01:25:10.470 [2024-12-09 05:19:46.241442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.470 [2024-12-09 05:19:46.241457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.470 [2024-12-09 05:19:46.241463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127472 len:8 PRP1 0x0 PRP2 0x0 01:25:10.470 [2024-12-09 05:19:46.241472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.470 [2024-12-09 05:19:46.241486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.470 [2024-12-09 05:19:46.241492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127480 len:8 PRP1 0x0 PRP2 0x0 01:25:10.470 [2024-12-09 05:19:46.241501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.470 [2024-12-09 05:19:46.241515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.470 [2024-12-09 05:19:46.241521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127488 len:8 PRP1 0x0 PRP2 0x0 01:25:10.470 [2024-12-09 05:19:46.241529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.470 [2024-12-09 05:19:46.241543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.470 [2024-12-09 05:19:46.241549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127496 len:8 PRP1 0x0 PRP2 0x0 01:25:10.470 [2024-12-09 05:19:46.241557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.470 [2024-12-09 05:19:46.241571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.470 [2024-12-09 05:19:46.241581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127504 len:8 PRP1 0x0 PRP2 0x0 01:25:10.470 [2024-12-09 05:19:46.241590] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.470 [2024-12-09 05:19:46.241604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.470 [2024-12-09 05:19:46.241610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127512 len:8 PRP1 0x0 PRP2 0x0 01:25:10.470 [2024-12-09 05:19:46.241618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.470 [2024-12-09 05:19:46.241632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.470 [2024-12-09 05:19:46.241639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127520 len:8 PRP1 0x0 PRP2 0x0 01:25:10.470 [2024-12-09 05:19:46.241647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.470 [2024-12-09 05:19:46.241655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.470 [2024-12-09 05:19:46.241662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.470 [2024-12-09 05:19:46.241668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127528 len:8 PRP1 0x0 PRP2 0x0 01:25:10.471 [2024-12-09 05:19:46.241676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.471 [2024-12-09 05:19:46.241684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.471 [2024-12-09 05:19:46.241690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.471 [2024-12-09 05:19:46.241696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127536 len:8 PRP1 0x0 PRP2 0x0 01:25:10.471 [2024-12-09 05:19:46.241704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.471 [2024-12-09 05:19:46.241712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:25:10.471 [2024-12-09 05:19:46.241718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:25:10.471 [2024-12-09 05:19:46.241724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127544 len:8 PRP1 0x0 PRP2 0x0 01:25:10.471 [2024-12-09 05:19:46.241732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.471 [2024-12-09 05:19:46.241778] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 01:25:10.471 [2024-12-09 05:19:46.241823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:25:10.471 [2024-12-09 05:19:46.241834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.471 [2024-12-09 
05:19:46.241843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:25:10.471 [2024-12-09 05:19:46.241852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.471 [2024-12-09 05:19:46.241860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:25:10.471 [2024-12-09 05:19:46.241868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.471 [2024-12-09 05:19:46.241882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:25:10.471 [2024-12-09 05:19:46.241891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:10.471 [2024-12-09 05:19:46.241899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:25:10.471 [2024-12-09 05:19:46.241935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf4c60 (9): Bad file descriptor 01:25:10.471 [2024-12-09 05:19:46.244626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:25:10.471 [2024-12-09 05:19:46.270927] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 01:25:10.471 8644.30 IOPS, 33.77 MiB/s [2024-12-09T05:19:52.927Z] 8652.09 IOPS, 33.80 MiB/s [2024-12-09T05:19:52.927Z] 8664.00 IOPS, 33.84 MiB/s [2024-12-09T05:19:52.927Z] 8656.00 IOPS, 33.81 MiB/s [2024-12-09T05:19:52.927Z] 8630.86 IOPS, 33.71 MiB/s [2024-12-09T05:19:52.927Z] 8626.73 IOPS, 33.70 MiB/s 01:25:10.471 Latency(us) 01:25:10.471 [2024-12-09T05:19:52.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:10.471 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:25:10.471 Verification LBA range: start 0x0 length 0x4000 01:25:10.471 NVMe0n1 : 15.01 8625.35 33.69 288.69 0.00 14334.14 457.89 16598.64 01:25:10.471 [2024-12-09T05:19:52.927Z] =================================================================================================================== 01:25:10.471 [2024-12-09T05:19:52.927Z] Total : 8625.35 33.69 288.69 0.00 14334.14 457.89 16598.64 01:25:10.471 Received shutdown signal, test time was about 15.000000 seconds 01:25:10.471 01:25:10.471 Latency(us) 01:25:10.471 [2024-12-09T05:19:52.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:10.471 [2024-12-09T05:19:52.927Z] =================================================================================================================== 01:25:10.471 [2024-12-09T05:19:52.927Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:25:10.471 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 01:25:10.471 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 01:25:10.471 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 01:25:10.471 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 01:25:10.471 05:19:52 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75374 01:25:10.471 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75374 /var/tmp/bdevperf.sock 01:25:10.471 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75374 ']' 01:25:10.471 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:25:10.471 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:10.471 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:25:10.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:25:10.471 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:10.471 05:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:25:11.421 05:19:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:11.421 05:19:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 01:25:11.421 05:19:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:25:11.421 [2024-12-09 05:19:53.685268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:25:11.421 05:19:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 01:25:11.682 [2024-12-09 05:19:53.913083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 01:25:11.682 05:19:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:25:11.949 NVMe0n1 01:25:11.949 05:19:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:25:12.244 01:25:12.244 05:19:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:25:12.502 01:25:12.502 05:19:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 01:25:12.502 05:19:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:25:12.761 05:19:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:25:13.019 05:19:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 01:25:16.306 05:19:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:25:16.306 05:19:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 01:25:16.306 05:19:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75451 01:25:16.306 05:19:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:25:16.306 05:19:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75451 01:25:17.242 { 01:25:17.242 "results": [ 01:25:17.242 { 01:25:17.242 "job": "NVMe0n1", 01:25:17.242 "core_mask": "0x1", 01:25:17.242 "workload": "verify", 01:25:17.242 "status": "finished", 01:25:17.242 "verify_range": { 01:25:17.242 "start": 0, 01:25:17.242 "length": 16384 01:25:17.242 }, 01:25:17.242 "queue_depth": 128, 01:25:17.242 "io_size": 4096, 01:25:17.242 "runtime": 1.009392, 01:25:17.242 "iops": 9456.18748712096, 01:25:17.242 "mibps": 36.93823237156625, 01:25:17.242 "io_failed": 0, 01:25:17.242 "io_timeout": 0, 01:25:17.242 "avg_latency_us": 13467.155461717764, 01:25:17.242 "min_latency_us": 1345.0620087336245, 01:25:17.242 "max_latency_us": 12763.779912663755 01:25:17.242 } 01:25:17.242 ], 01:25:17.242 "core_count": 1 01:25:17.242 } 01:25:17.242 05:19:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:25:17.242 [2024-12-09 05:19:52.633754] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:25:17.242 [2024-12-09 05:19:52.633860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75374 ] 01:25:17.242 [2024-12-09 05:19:52.784465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:17.242 [2024-12-09 05:19:52.864187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:17.242 [2024-12-09 05:19:52.942821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:25:17.242 [2024-12-09 05:19:55.217568] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 01:25:17.242 [2024-12-09 05:19:55.217716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:25:17.242 [2024-12-09 05:19:55.217735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:17.242 [2024-12-09 05:19:55.217749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:25:17.242 [2024-12-09 05:19:55.217760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:17.242 [2024-12-09 05:19:55.217771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:25:17.242 [2024-12-09 05:19:55.217782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:17.242 [2024-12-09 05:19:55.217793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:25:17.242 [2024-12-09 05:19:55.217803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:17.242 [2024-12-09 05:19:55.217814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 01:25:17.242 [2024-12-09 05:19:55.217856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 01:25:17.242 [2024-12-09 05:19:55.217879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x898c60 (9): Bad file descriptor 01:25:17.242 [2024-12-09 05:19:55.222812] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 01:25:17.242 Running I/O for 1 seconds... 01:25:17.242 9409.00 IOPS, 36.75 MiB/s 01:25:17.242 Latency(us) 01:25:17.242 [2024-12-09T05:19:59.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:25:17.242 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:25:17.242 Verification LBA range: start 0x0 length 0x4000 01:25:17.242 NVMe0n1 : 1.01 9456.19 36.94 0.00 0.00 13467.16 1345.06 12763.78 01:25:17.242 [2024-12-09T05:19:59.698Z] =================================================================================================================== 01:25:17.242 [2024-12-09T05:19:59.698Z] Total : 9456.19 36.94 0.00 0.00 13467.16 1345.06 12763.78 01:25:17.242 05:19:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 01:25:17.242 05:19:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:25:17.501 05:19:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:25:17.761 05:20:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:25:17.761 05:20:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 01:25:18.021 05:20:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:25:18.021 05:20:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 01:25:21.310 05:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:25:21.310 05:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 01:25:21.310 05:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75374 01:25:21.311 05:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75374 ']' 01:25:21.311 05:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75374 01:25:21.311 05:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 01:25:21.311 05:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:21.311 05:20:03 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75374 01:25:21.311 05:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:25:21.311 05:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:25:21.311 05:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75374' 01:25:21.311 killing process with pid 75374 01:25:21.311 05:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75374 01:25:21.311 05:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75374 01:25:21.570 05:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 01:25:21.829 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:25:21.829 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 01:25:21.829 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:25:21.829 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 01:25:21.829 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 01:25:21.829 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 01:25:21.829 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:25:21.829 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 01:25:21.829 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 01:25:21.829 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:25:21.829 rmmod nvme_tcp 01:25:21.829 rmmod nvme_fabrics 01:25:21.829 rmmod nvme_keyring 01:25:22.089 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:25:22.089 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 01:25:22.089 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 01:25:22.089 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75127 ']' 01:25:22.089 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75127 01:25:22.089 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75127 ']' 01:25:22.089 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75127 01:25:22.089 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 01:25:22.089 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:22.089 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75127 01:25:22.089 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:25:22.089 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:25:22.089 killing process with pid 75127 01:25:22.089 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75127' 01:25:22.089 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 
75127 01:25:22.089 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75127 01:25:22.348 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:25:22.348 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:25:22.348 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:25:22.348 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 01:25:22.348 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:25:22.348 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 01:25:22.348 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 01:25:22.349 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:25:22.349 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:25:22.349 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:25:22.349 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:25:22.349 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:25:22.349 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:25:22.349 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:25:22.349 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:25:22.349 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:25:22.349 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:25:22.349 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:25:22.349 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:25:22.609 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:25:22.609 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:25:22.609 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:25:22.609 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 01:25:22.609 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:25:22.609 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:25:22.609 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:25:22.609 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 01:25:22.609 01:25:22.609 real 0m31.699s 01:25:22.609 user 2m0.809s 01:25:22.609 sys 0m4.952s 01:25:22.609 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:22.609 ************************************ 01:25:22.609 END TEST nvmf_failover 01:25:22.609 05:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:25:22.609 ************************************ 
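For reference, the failover exercise logged above reduces to a short RPC sequence. The following is a condensed sketch reconstructed from the commands visible in the run (target address 10.0.0.3, ports 4420-4422, the subsystem NQN and the bdevperf socket are taken from the log); it is not the full test/nvmf/host/failover.sh script and omits its assertions and cleanup.

    # Condensed sketch of the failover flow exercised above (not the full failover.sh).
    # Assumes an SPDK target already listening on 10.0.0.3:4420 for nqn.2016-06.io.spdk:cnode1
    # and a bdevperf instance serving RPCs on /var/tmp/bdevperf.sock.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # Expose two extra portals on the target so the initiator has somewhere to fail over to.
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4421
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4422

    # Attach all three paths to one bdev controller in failover mode.
    for port in 4420 4421 4422; do
        $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s $port \
            -f ipv4 -n $nqn -x failover
    done
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0

    # Drop the active path; I/O driven by bdevperf.py should continue on 10.0.0.3:4421,
    # which is what the "Start failover from 10.0.0.3:4420 to 10.0.0.3:4421" notice above shows.
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n $nqn
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests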
01:25:22.609 05:20:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 01:25:22.609 05:20:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:25:22.609 05:20:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:22.609 05:20:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:25:22.609 ************************************ 01:25:22.609 START TEST nvmf_host_discovery 01:25:22.609 ************************************ 01:25:22.609 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 01:25:22.877 * Looking for test storage... 01:25:22.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:25:22.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:22.877 --rc genhtml_branch_coverage=1 01:25:22.877 --rc genhtml_function_coverage=1 01:25:22.877 --rc genhtml_legend=1 01:25:22.877 --rc geninfo_all_blocks=1 01:25:22.877 --rc geninfo_unexecuted_blocks=1 01:25:22.877 01:25:22.877 ' 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:25:22.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:22.877 --rc genhtml_branch_coverage=1 01:25:22.877 --rc genhtml_function_coverage=1 01:25:22.877 --rc genhtml_legend=1 01:25:22.877 --rc geninfo_all_blocks=1 01:25:22.877 --rc geninfo_unexecuted_blocks=1 01:25:22.877 01:25:22.877 ' 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:25:22.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:22.877 --rc genhtml_branch_coverage=1 01:25:22.877 --rc genhtml_function_coverage=1 01:25:22.877 --rc genhtml_legend=1 01:25:22.877 --rc geninfo_all_blocks=1 01:25:22.877 --rc geninfo_unexecuted_blocks=1 01:25:22.877 01:25:22.877 ' 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:25:22.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:22.877 --rc genhtml_branch_coverage=1 01:25:22.877 --rc genhtml_function_coverage=1 01:25:22.877 --rc genhtml_legend=1 01:25:22.877 --rc geninfo_all_blocks=1 01:25:22.877 --rc geninfo_unexecuted_blocks=1 01:25:22.877 01:25:22.877 ' 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:25:22.877 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:25:22.878 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:25:22.878 Cannot find device "nvmf_init_br" 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 01:25:22.878 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:25:23.155 Cannot find device "nvmf_init_br2" 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:25:23.155 Cannot find device "nvmf_tgt_br" 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:25:23.155 Cannot find device "nvmf_tgt_br2" 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:25:23.155 Cannot find device "nvmf_init_br" 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:25:23.155 Cannot find device "nvmf_init_br2" 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:25:23.155 Cannot find device "nvmf_tgt_br" 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:25:23.155 Cannot find device "nvmf_tgt_br2" 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:25:23.155 Cannot find device "nvmf_br" 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:25:23.155 Cannot find device "nvmf_init_if" 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:25:23.155 Cannot find device "nvmf_init_if2" 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:25:23.155 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:25:23.155 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:25:23.155 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:25:23.425 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:25:23.425 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 01:25:23.425 01:25:23.425 --- 10.0.0.3 ping statistics --- 01:25:23.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:25:23.425 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:25:23.425 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:25:23.425 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms 01:25:23.425 01:25:23.425 --- 10.0.0.4 ping statistics --- 01:25:23.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:25:23.425 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:25:23.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:25:23.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 01:25:23.425 01:25:23.425 --- 10.0.0.1 ping statistics --- 01:25:23.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:25:23.425 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:25:23.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:25:23.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 01:25:23.425 01:25:23.425 --- 10.0.0.2 ping statistics --- 01:25:23.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:25:23.425 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75778 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75778 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75778 ']' 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:23.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:23.425 05:20:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:23.425 [2024-12-09 05:20:05.785593] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
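The nvmf_veth_init sequence above (namespace, veth pairs, bridge, iptables rule, ping checks) can be read as the minimal sketch below. It is a simplified restatement using the names and addresses printed in the log, reduced to a single initiator/target pair; the real common.sh also creates nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4 and ignores failures from the teardown commands.

    # Minimal sketch of the virtual test network built by nvmf_veth_init above
    # (names and addresses taken from the log; second interface pair and error handling omitted).
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if end carries the address, the *_br end is enslaved to the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side ends together and let NVMe/TCP traffic reach the initiator interface.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # Connectivity check in both directions, as in the ping output above.
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1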
01:25:23.425 [2024-12-09 05:20:05.785654] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:25:23.685 [2024-12-09 05:20:05.935056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:23.685 [2024-12-09 05:20:06.009280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:25:23.685 [2024-12-09 05:20:06.009334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:25:23.685 [2024-12-09 05:20:06.009341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:25:23.685 [2024-12-09 05:20:06.009346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:25:23.685 [2024-12-09 05:20:06.009350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:25:23.685 [2024-12-09 05:20:06.009719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:25:23.685 [2024-12-09 05:20:06.084047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:25:24.253 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:24.253 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 01:25:24.253 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:25:24.253 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 01:25:24.253 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:24.253 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:25:24.253 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:25:24.253 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:24.253 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:24.253 [2024-12-09 05:20:06.681525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:25:24.253 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:24.253 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 01:25:24.253 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:24.253 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:24.253 [2024-12-09 05:20:06.693637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 01:25:24.253 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:24.253 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 01:25:24.253 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:24.253 05:20:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:24.512 null0 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:24.513 null1 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75810 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75810 /tmp/host.sock 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75810 ']' 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:24.513 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:24.513 05:20:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:24.513 [2024-12-09 05:20:06.790859] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
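Before the discovery assertions begin, the target side has been prepared with the handful of RPCs visible above. Condensed, and with a plain stand-in for the rpc_cmd helper, the setup looks roughly like this (addresses, NQNs and bdev sizes are the ones printed in the log; the real rpc_cmd in autotest_common.sh adds retry and xtrace handling):

    # Condensed restatement of the discovery-test setup above (not discovery.sh itself).
    # rpc_cmd here is a simple stand-in for the autotest helper.
    rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

    # TCP transport plus the well-known discovery subsystem listening on port 8009.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009

    # Two null bdevs that later become namespaces of nqn.2016-06.io.spdk:cnode0.
    rpc_cmd bdev_null_create null0 1000 512
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine

    # A second nvmf_tgt acts as the host; its RPC socket is /tmp/host.sock and it will
    # run bdev_nvme_start_discovery against 10.0.0.3:8009 once cnode0 is exposed.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!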
01:25:24.513 [2024-12-09 05:20:06.790917] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75810 ] 01:25:24.513 [2024-12-09 05:20:06.943889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:25:24.772 [2024-12-09 05:20:07.020112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:24.772 [2024-12-09 05:20:07.094739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:25:25.339 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:25.339 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 01:25:25.339 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:25:25.339 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:25:25.340 05:20:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:25:25.340 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.597 05:20:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:25.597 [2024-12-09 05:20:07.979399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:25:25.597 05:20:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.597 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 01:25:25.597 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 01:25:25.597 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:25:25.597 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.597 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:25.597 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:25:25.597 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:25:25.597 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:25:25.597 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 01:25:25.856 05:20:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 01:25:26.423 [2024-12-09 05:20:08.660538] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:25:26.423 [2024-12-09 05:20:08.660571] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:25:26.423 [2024-12-09 05:20:08.660589] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:25:26.423 [2024-12-09 05:20:08.666585] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 01:25:26.423 [2024-12-09 05:20:08.720844] 
bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 01:25:26.423 [2024-12-09 05:20:08.721948] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d3fe60:1 started. 01:25:26.423 [2024-12-09 05:20:08.723713] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 01:25:26.423 [2024-12-09 05:20:08.723740] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:25:26.423 [2024-12-09 05:20:08.729104] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d3fe60 was disconnected and freed. delete nvme_qpair. 01:25:26.990 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:26.990 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:25:26.990 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:25:26.990 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:25:26.990 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:25:26.990 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:26.991 05:20:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
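The getter helpers exercised above (host/discovery.sh@55, @59 and @63 in the trace) all follow the same pattern: query the host application's RPC socket and normalize the JSON output into a single sorted line. A minimal sketch reconstructed from this xtrace output is shown below; it is not the verbatim host/discovery.sh source, rpc_cmd is assumed to be the autotest wrapper around scripts/rpc.py, and /tmp/host.sock is the host application's RPC socket as used throughout this run.

# Sketch of the helpers seen in the xtrace above (reconstructed, not verbatim).
get_subsystem_names() {
    # NVMe controller names known to the host app, e.g. "nvme0"
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # Bdevs created by the attached controllers, e.g. "nvme0n1 nvme0n2"
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # TCP service ports of all paths to one controller, e.g. "4420 4421"
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}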
01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:26.991 [2024-12-09 05:20:09.440978] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d4e2f0:1 started. 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:26.991 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:25:27.251 [2024-12-09 05:20:09.449066] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d4e2f0 was disconnected and freed. delete nvme_qpair. 
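Every '[[ ... == ... ]]' and notification-count check in this test goes through the waitforcondition helper (common/autotest_common.sh@918-@924 in the trace), which re-evaluates the condition string up to ten times, sleeping one second between attempts. The sketch below is reconstructed from the xtrace; the failure return value is an assumption, since this run never reaches the timeout path.

# Sketch of waitforcondition as it appears in the xtrace above; failure path assumed.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # cond is a shell expression, e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        # or 'get_notification_count && ((notification_count == expected_count))'
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}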
01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:27.251 [2024-12-09 05:20:09.553482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:25:27.251 [2024-12-09 05:20:09.554217] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:25:27.251 [2024-12-09 05:20:09.554255] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:25:27.251 [2024-12-09 05:20:09.560175] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:25:27.251 [2024-12-09 05:20:09.623744] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 01:25:27.251 [2024-12-09 05:20:09.623793] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 01:25:27.251 [2024-12-09 05:20:09.623800] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:25:27.251 [2024-12-09 05:20:09.623804] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths 
nvme0 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:27.251 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:25:27.252 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:25:27.252 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:25:27.252 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.252 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 01:25:27.252 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:27.512 [2024-12-09 05:20:09.746026] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:25:27.512 [2024-12-09 05:20:09.746070] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:25:27.512 [2024-12-09 05:20:09.748975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:25:27.512 [2024-12-09 05:20:09.749007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:27.512 [2024-12-09 05:20:09.749016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:25:27.512 [2024-12-09 05:20:09.749022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:27.512 [2024-12-09 05:20:09.749028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:25:27.512 [2024-12-09 05:20:09.749034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:27.512 [2024-12-09 05:20:09.749040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:25:27.512 [2024-12-09 05:20:09.749045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:25:27.512 [2024-12-09 05:20:09.749051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1c240 is same with the state(6) to be set 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:25:27.512 [2024-12-09 05:20:09.752108] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 01:25:27.512 [2024-12-09 05:20:09.752138] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:25:27.512 [2024-12-09 05:20:09.752193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1c240 (9): Bad file descriptor 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:25:27.512 05:20:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 01:25:27.512 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:25:27.513 05:20:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.513 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:27.772 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:25:27.772 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:25:27.772 05:20:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.772 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 01:25:27.772 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:27.772 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 01:25:27.772 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 01:25:27.772 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:27.772 
05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:27.772 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 01:25:27.772 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:25:27.772 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:25:27.772 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.772 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:27.772 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:25:27.772 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:27.773 05:20:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:28.712 [2024-12-09 05:20:11.107177] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:25:28.712 [2024-12-09 05:20:11.107214] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:25:28.712 [2024-12-09 05:20:11.107230] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:25:28.712 [2024-12-09 05:20:11.113173] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 01:25:28.972 [2024-12-09 05:20:11.171395] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 01:25:28.972 [2024-12-09 05:20:11.172368] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1d14de0:1 started. 01:25:28.972 [2024-12-09 05:20:11.174517] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 01:25:28.972 [2024-12-09 05:20:11.174559] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:28.972 [2024-12-09 05:20:11.176158] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1d14de0 was disconnected and freed. delete nvme_qpair. 
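At this point the discovery service has been stopped (@134) and restarted (@141) against the same discovery subsystem on 10.0.0.3:8009. For reference, the two RPCs map to the calls below on the host socket, using exactly the arguments visible in this trace; rpc_cmd is assumed to wrap scripts/rpc.py. The duplicate start attempted at @143 below then fails with -17 "File exists", because a discovery service named nvme is already running.

# Start a discovery service named "nvme" and wait (-w) for the initial attach to complete.
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w

# Stop it again; the controllers and bdevs it created are torn down, as the empty
# get_subsystem_names/get_bdev_list checks at @136/@137 above confirm.
rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme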
01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:28.972 request: 01:25:28.972 { 01:25:28.972 "name": "nvme", 01:25:28.972 "trtype": "tcp", 01:25:28.972 "traddr": "10.0.0.3", 01:25:28.972 "adrfam": "ipv4", 01:25:28.972 "trsvcid": "8009", 01:25:28.972 "hostnqn": "nqn.2021-12.io.spdk:test", 01:25:28.972 "wait_for_attach": true, 01:25:28.972 "method": "bdev_nvme_start_discovery", 01:25:28.972 "req_id": 1 01:25:28.972 } 01:25:28.972 Got JSON-RPC error response 01:25:28.972 response: 01:25:28.972 { 01:25:28.972 "code": -17, 01:25:28.972 "message": "File exists" 01:25:28.972 } 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:25:28.972 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:28.973 request: 01:25:28.973 { 01:25:28.973 "name": "nvme_second", 01:25:28.973 "trtype": "tcp", 01:25:28.973 "traddr": "10.0.0.3", 01:25:28.973 "adrfam": "ipv4", 01:25:28.973 "trsvcid": "8009", 01:25:28.973 "hostnqn": "nqn.2021-12.io.spdk:test", 01:25:28.973 "wait_for_attach": true, 01:25:28.973 "method": "bdev_nvme_start_discovery", 01:25:28.973 "req_id": 1 01:25:28.973 } 01:25:28.973 Got JSON-RPC error response 01:25:28.973 response: 01:25:28.973 { 01:25:28.973 "code": -17, 01:25:28.973 "message": "File exists" 01:25:28.973 } 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # 
[[ -n '' ]] 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:25:28.973 05:20:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:30.353 [2024-12-09 05:20:12.416484] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:25:30.353 [2024-12-09 05:20:12.416548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d16aa0 with addr=10.0.0.3, port=8010 01:25:30.353 [2024-12-09 05:20:12.416573] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:25:30.353 [2024-12-09 05:20:12.416580] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:25:30.353 [2024-12-09 05:20:12.416586] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 01:25:31.290 [2024-12-09 05:20:13.414575] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:25:31.290 [2024-12-09 05:20:13.414649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d16aa0 with addr=10.0.0.3, port=8010 01:25:31.290 [2024-12-09 05:20:13.414674] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:25:31.290 [2024-12-09 05:20:13.414682] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:25:31.290 [2024-12-09 05:20:13.414689] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 01:25:32.224 [2024-12-09 05:20:14.412462] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 01:25:32.224 request: 01:25:32.224 { 01:25:32.224 "name": "nvme_second", 01:25:32.224 "trtype": "tcp", 01:25:32.224 "traddr": "10.0.0.3", 01:25:32.224 "adrfam": "ipv4", 01:25:32.224 "trsvcid": "8010", 01:25:32.224 "hostnqn": "nqn.2021-12.io.spdk:test", 01:25:32.224 "wait_for_attach": false, 01:25:32.224 "attach_timeout_ms": 3000, 01:25:32.224 "method": "bdev_nvme_start_discovery", 01:25:32.224 "req_id": 1 01:25:32.224 } 01:25:32.224 Got JSON-RPC error response 01:25:32.224 response: 01:25:32.224 { 01:25:32.224 "code": -110, 01:25:32.224 "message": "Connection timed out" 01:25:32.224 } 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:25:32.224 05:20:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75810 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:25:32.224 rmmod nvme_tcp 01:25:32.224 rmmod nvme_fabrics 01:25:32.224 rmmod nvme_keyring 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75778 ']' 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75778 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75778 ']' 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75778 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75778 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:25:32.224 killing process with pid 75778 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75778' 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75778 01:25:32.224 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75778 01:25:32.790 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:25:32.790 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:25:32.790 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:25:32.790 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 01:25:32.790 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 01:25:32.790 05:20:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 01:25:32.790 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:25:32.790 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:25:32.790 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:25:32.790 05:20:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:25:32.790 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:25:32.790 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:25:32.790 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:25:32.790 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:25:32.790 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:25:32.790 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:25:32.790 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:25:32.790 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:25:32.790 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:25:32.790 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:25:32.790 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:25:32.790 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:25:32.790 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 01:25:32.790 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:25:32.790 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:25:32.790 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:25:33.047 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 01:25:33.047 01:25:33.047 real 0m10.254s 01:25:33.047 user 0m18.470s 01:25:33.047 sys 0m2.405s 01:25:33.047 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 01:25:33.047 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:25:33.047 ************************************ 01:25:33.047 END TEST nvmf_host_discovery 01:25:33.047 ************************************ 01:25:33.047 05:20:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 01:25:33.047 05:20:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:25:33.047 05:20:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:25:33.047 05:20:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:25:33.047 
************************************ 01:25:33.047 START TEST nvmf_host_multipath_status 01:25:33.047 ************************************ 01:25:33.047 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 01:25:33.047 * Looking for test storage... 01:25:33.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:25:33.047 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:25:33.047 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 01:25:33.047 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:25:33.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:33.306 --rc genhtml_branch_coverage=1 01:25:33.306 --rc genhtml_function_coverage=1 01:25:33.306 --rc genhtml_legend=1 01:25:33.306 --rc geninfo_all_blocks=1 01:25:33.306 --rc geninfo_unexecuted_blocks=1 01:25:33.306 01:25:33.306 ' 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:25:33.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:33.306 --rc genhtml_branch_coverage=1 01:25:33.306 --rc genhtml_function_coverage=1 01:25:33.306 --rc genhtml_legend=1 01:25:33.306 --rc geninfo_all_blocks=1 01:25:33.306 --rc geninfo_unexecuted_blocks=1 01:25:33.306 01:25:33.306 ' 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:25:33.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:33.306 --rc genhtml_branch_coverage=1 01:25:33.306 --rc genhtml_function_coverage=1 01:25:33.306 --rc genhtml_legend=1 01:25:33.306 --rc geninfo_all_blocks=1 01:25:33.306 --rc geninfo_unexecuted_blocks=1 01:25:33.306 01:25:33.306 ' 01:25:33.306 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:25:33.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:25:33.306 --rc genhtml_branch_coverage=1 01:25:33.306 --rc genhtml_function_coverage=1 01:25:33.306 --rc genhtml_legend=1 01:25:33.306 --rc geninfo_all_blocks=1 01:25:33.306 --rc geninfo_unexecuted_blocks=1 01:25:33.306 01:25:33.306 ' 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:25:33.307 05:20:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:25:33.307 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:25:33.307 Cannot find device "nvmf_init_br" 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:25:33.307 Cannot find device "nvmf_init_br2" 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:25:33.307 Cannot find device "nvmf_tgt_br" 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:25:33.307 Cannot find device "nvmf_tgt_br2" 01:25:33.307 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 01:25:33.308 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:25:33.308 Cannot find device "nvmf_init_br" 01:25:33.308 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 01:25:33.308 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:25:33.308 Cannot find device "nvmf_init_br2" 01:25:33.308 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 01:25:33.308 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:25:33.308 Cannot find device "nvmf_tgt_br" 01:25:33.308 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 01:25:33.308 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:25:33.308 Cannot find device "nvmf_tgt_br2" 01:25:33.308 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 01:25:33.308 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:25:33.308 Cannot find device "nvmf_br" 01:25:33.308 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 01:25:33.308 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 01:25:33.566 Cannot find device "nvmf_init_if" 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:25:33.566 Cannot find device "nvmf_init_if2" 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:25:33.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:25:33.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:25:33.566 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:25:33.566 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:25:33.566 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 01:25:33.566 01:25:33.567 --- 10.0.0.3 ping statistics --- 01:25:33.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:25:33.567 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 01:25:33.567 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:25:33.567 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:25:33.567 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 01:25:33.567 01:25:33.567 --- 10.0.0.4 ping statistics --- 01:25:33.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:25:33.567 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 01:25:33.567 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:25:33.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:25:33.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 01:25:33.567 01:25:33.567 --- 10.0.0.1 ping statistics --- 01:25:33.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:25:33.567 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 01:25:33.567 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:25:33.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:25:33.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 01:25:33.567 01:25:33.567 --- 10.0.0.2 ping statistics --- 01:25:33.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:25:33.567 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 01:25:33.567 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:25:33.567 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 01:25:33.567 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:25:33.567 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:25:33.567 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:25:33.567 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:25:33.567 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:25:33.567 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:25:33.567 05:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:25:33.825 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 01:25:33.825 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:25:33.825 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 01:25:33.825 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:25:33.825 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76322 01:25:33.825 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:25:33.825 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76322 01:25:33.825 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76322 ']' 01:25:33.825 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:25:33.825 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:33.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:25:33.825 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
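The nvmf_veth_init sequence that just completed in the trace builds the test's virtual network before nvmf_tgt is launched inside the nvmf_tgt_ns_spdk namespace: initiator veth interfaces on the host side get 10.0.0.1/10.0.0.2, the target ends are moved into the namespace with 10.0.0.3/10.0.0.4, everything is joined through the nvmf_br bridge, iptables ACCEPT rules open TCP port 4420, and pings in both directions confirm connectivity. A condensed sketch of those steps, trimmed to one initiator/target pair and using only commands that appear in the trace (interface names and addresses as logged):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joining both sides
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                             # verify host -> namespace path
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3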
01:25:33.825 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:33.825 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:25:33.825 [2024-12-09 05:20:16.088726] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:25:33.825 [2024-12-09 05:20:16.088784] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:25:33.825 [2024-12-09 05:20:16.242232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:25:34.086 [2024-12-09 05:20:16.292333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:25:34.086 [2024-12-09 05:20:16.292409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:25:34.086 [2024-12-09 05:20:16.292415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:25:34.086 [2024-12-09 05:20:16.292419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:25:34.086 [2024-12-09 05:20:16.292423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:25:34.086 [2024-12-09 05:20:16.293285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:25:34.086 [2024-12-09 05:20:16.293287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:25:34.086 [2024-12-09 05:20:16.334742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:25:34.669 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:34.669 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 01:25:34.669 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:25:34.669 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 01:25:34.669 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:25:34.669 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:25:34.669 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76322 01:25:34.669 05:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:25:34.927 [2024-12-09 05:20:17.176435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:25:34.927 05:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:25:34.927 Malloc0 01:25:35.185 05:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 01:25:35.185 05:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 01:25:35.443 05:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:25:35.703 [2024-12-09 05:20:17.959654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:25:35.703 05:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:25:35.703 [2024-12-09 05:20:18.135415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:25:35.703 05:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 01:25:35.703 05:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76368 01:25:35.703 05:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:25:35.703 05:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76368 /var/tmp/bdevperf.sock 01:25:35.703 05:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76368 ']' 01:25:35.703 05:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:25:35.703 05:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 01:25:35.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:25:35.703 05:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
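At this point the target side is fully configured (tcp transport, Malloc0 namespace under nqn.2016-06.io.spdk:cnode1, listeners on 10.0.0.3 ports 4420 and 4421) and bdevperf has been started with its own RPC socket at /var/tmp/bdevperf.sock. The rest of the trace is the multipath check loop: attach one controller per listener with -x multipath, flip a listener's ANA state on the target, then read bdev_nvme_get_io_paths from bdevperf and filter it with jq per trsvcid. A condensed sketch of that check, using only commands and jq filters that appear in the trace ($RPC is shorthand introduced here for the rpc.py path shown in the log):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# one controller per listener, multipath-aware (-x multipath)
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
# change the ANA state of the 4420 listener on the target (default /var/tmp/spdk.sock)
$RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
# ask bdevperf which path is current/connected/accessible for a given port
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | \
  jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'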
01:25:35.703 05:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 01:25:35.703 05:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:25:36.638 05:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:25:36.638 05:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 01:25:36.638 05:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:25:36.897 05:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:25:37.155 Nvme0n1 01:25:37.155 05:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:25:37.414 Nvme0n1 01:25:37.414 05:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 01:25:37.414 05:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 01:25:39.948 05:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 01:25:39.948 05:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 01:25:39.948 05:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:25:39.948 05:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 01:25:40.884 05:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 01:25:40.884 05:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:25:40.884 05:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:40.884 05:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:25:41.143 05:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:41.143 05:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:25:41.143 05:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:41.143 05:20:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:25:41.402 05:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:25:41.402 05:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:25:41.402 05:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:41.403 05:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:25:41.403 05:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:41.403 05:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:25:41.403 05:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:25:41.403 05:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:41.696 05:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:41.696 05:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:25:41.696 05:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:41.696 05:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:25:41.954 05:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:41.954 05:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:25:41.954 05:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:25:41.954 05:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:42.212 05:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:42.212 05:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 01:25:42.212 05:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:25:42.212 05:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:25:42.469 05:20:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 01:25:43.405 05:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 01:25:43.664 05:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:25:43.664 05:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:43.664 05:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:25:43.664 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:25:43.664 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:25:43.664 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:43.664 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:25:43.923 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:43.923 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:25:43.923 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:25:43.923 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:44.184 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:44.184 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:25:44.184 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:44.184 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:25:44.445 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:44.445 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:25:44.445 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:44.445 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:25:44.445 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:44.445 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:25:44.445 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:44.445 05:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:25:44.705 05:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:44.705 05:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 01:25:44.705 05:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:25:44.964 05:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 01:25:45.224 05:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 01:25:46.162 05:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 01:25:46.162 05:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:25:46.162 05:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:25:46.162 05:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:46.422 05:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:46.422 05:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:25:46.422 05:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:25:46.422 05:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:46.681 05:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:25:46.681 05:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:25:46.681 05:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:25:46.681 05:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:46.681 05:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:46.681 05:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 01:25:46.681 05:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:46.681 05:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:25:46.941 05:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:46.941 05:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:25:46.941 05:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:46.941 05:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:25:47.201 05:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:47.201 05:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:25:47.201 05:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:47.201 05:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:25:47.460 05:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:47.460 05:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 01:25:47.460 05:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:25:47.461 05:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:25:47.720 05:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 01:25:49.100 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 01:25:49.100 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:25:49.100 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:49.100 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:25:49.100 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:49.100 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:25:49.100 05:20:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:49.100 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:25:49.359 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:25:49.359 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:25:49.359 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:49.359 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:25:49.359 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:49.359 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:25:49.359 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:49.359 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:25:49.617 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:49.617 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:25:49.617 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:25:49.617 05:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:49.877 05:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:49.877 05:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:25:49.877 05:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:25:49.877 05:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:49.877 05:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:25:49.877 05:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 01:25:49.877 05:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:25:50.136 05:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:25:50.395 05:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 01:25:51.334 05:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 01:25:51.334 05:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:25:51.334 05:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:51.334 05:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:25:51.593 05:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:25:51.593 05:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:25:51.593 05:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:51.593 05:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:25:51.852 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:25:51.852 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:25:51.852 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:25:51.852 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:52.111 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:52.111 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:25:52.111 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:52.111 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:25:52.111 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:52.111 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 01:25:52.111 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:52.111 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 01:25:52.369 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:25:52.369 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:25:52.369 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:52.369 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:25:52.628 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:25:52.628 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 01:25:52.628 05:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:25:52.886 05:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:25:53.145 05:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 01:25:54.093 05:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 01:25:54.093 05:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:25:54.093 05:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:54.093 05:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:25:54.368 05:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:25:54.368 05:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:25:54.368 05:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:54.368 05:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:25:54.368 05:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:54.368 05:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:25:54.368 05:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:54.368 05:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
01:25:54.628 05:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:54.629 05:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:25:54.629 05:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:54.629 05:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:25:54.888 05:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:54.888 05:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 01:25:54.888 05:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:54.888 05:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:25:55.147 05:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:25:55.147 05:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:25:55.147 05:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:55.147 05:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:25:55.147 05:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:55.147 05:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 01:25:55.407 05:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 01:25:55.407 05:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 01:25:55.665 05:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:25:55.924 05:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 01:25:56.860 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 01:25:56.860 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:25:56.860 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
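The repeated rpc.py/jq pairs in the trace above are easier to follow when collapsed back into the helpers they come from. The sketch below is reconstructed purely from the commands the trace prints (the host/multipath_status.sh@59-@73 tags); it is not the verbatim script, and the real helper may differ in details such as error reporting. The socket path, subsystem NQN and listener address are copied from the log.

```bash
#!/usr/bin/env bash
# Sketch of the helpers exercised above, reconstructed from the traced commands.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# port_status <trsvcid> <attribute> <expected>: query the I/O paths seen by the
# bdevperf host and compare one attribute (current/connected/accessible) of the
# path using the given port against the expected value.
port_status() {
  local port=$1 attr=$2 expected=$3 actual
  actual=$("$rpc" -s "$bdevperf_sock" bdev_nvme_get_io_paths |
    jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
  [[ "$actual" == "$expected" ]]
}

# check_status: expected values in the same order the trace checks them
# (4420 current, 4421 current, 4420/4421 connected, 4420/4421 accessible).
check_status() {
  port_status 4420 current "$1" && port_status 4421 current "$2" &&
  port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
  port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}

# set_ANA_state <state for 4420> <state for 4421>: flip the ANA state of the two
# target-side listeners; the test then sleeps for a second and re-checks status.
set_ANA_state() {
  "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n "$1"
  "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}
```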
01:25:56.860 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:25:57.119 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:57.119 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:25:57.119 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:57.119 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:25:57.119 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:57.119 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:25:57.119 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:57.119 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:25:57.378 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:57.378 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:25:57.378 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:57.378 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:25:57.636 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:57.636 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:25:57.636 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:57.636 05:20:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:25:57.895 05:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:57.895 05:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:25:57.895 05:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:57.895 05:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:25:57.895 05:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:57.895 
05:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 01:25:57.895 05:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:25:58.154 05:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:25:58.413 05:20:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 01:25:59.349 05:20:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 01:25:59.349 05:20:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:25:59.350 05:20:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:59.350 05:20:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:25:59.609 05:20:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:25:59.609 05:20:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:25:59.609 05:20:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:59.609 05:20:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:25:59.869 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:25:59.869 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:25:59.869 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:25:59.869 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:26:00.128 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:26:00.128 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:26:00.128 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:26:00.128 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:26:00.128 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:26:00.128 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:26:00.128 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:26:00.128 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:26:00.386 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:26:00.386 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:26:00.386 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:26:00.386 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:26:00.645 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:26:00.645 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 01:26:00.645 05:20:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:26:00.903 05:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 01:26:01.162 05:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 01:26:02.100 05:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 01:26:02.100 05:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:26:02.100 05:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:26:02.100 05:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:26:02.359 05:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:26:02.359 05:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:26:02.359 05:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:26:02.359 05:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:26:02.359 05:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:26:02.359 05:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 01:26:02.359 05:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:26:02.359 05:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:26:02.618 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:26:02.618 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:26:02.618 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:26:02.618 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:26:02.877 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:26:02.877 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:26:02.877 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:26:02.877 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:26:03.136 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:26:03.136 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:26:03.136 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:26:03.136 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:26:03.394 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:26:03.394 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 01:26:03.394 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:26:03.394 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:26:03.653 05:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 01:26:04.590 05:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 01:26:04.590 05:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:26:04.590 05:20:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:26:04.590 05:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:26:04.850 05:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:26:04.850 05:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:26:04.850 05:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:26:04.850 05:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:26:05.114 05:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:26:05.114 05:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:26:05.114 05:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:26:05.114 05:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:26:05.379 05:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:26:05.379 05:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:26:05.379 05:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:26:05.379 05:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:26:05.379 05:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:26:05.379 05:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:26:05.379 05:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:26:05.645 05:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:26:05.645 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:26:05.645 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:26:05.645 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:26:05.645 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:26:05.903 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:26:05.903 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76368 01:26:05.903 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76368 ']' 01:26:05.903 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76368 01:26:05.903 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 01:26:05.903 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:05.903 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76368 01:26:05.903 killing process with pid 76368 01:26:05.903 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:26:05.903 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:26:05.903 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76368' 01:26:05.903 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76368 01:26:05.904 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76368 01:26:05.904 { 01:26:05.904 "results": [ 01:26:05.904 { 01:26:05.904 "job": "Nvme0n1", 01:26:05.904 "core_mask": "0x4", 01:26:05.904 "workload": "verify", 01:26:05.904 "status": "terminated", 01:26:05.904 "verify_range": { 01:26:05.904 "start": 0, 01:26:05.904 "length": 16384 01:26:05.904 }, 01:26:05.904 "queue_depth": 128, 01:26:05.904 "io_size": 4096, 01:26:05.904 "runtime": 28.461985, 01:26:05.904 "iops": 8290.672628771325, 01:26:05.904 "mibps": 32.385439956137986, 01:26:05.904 "io_failed": 0, 01:26:05.904 "io_timeout": 0, 01:26:05.904 "avg_latency_us": 15417.123981702802, 01:26:05.904 "min_latency_us": 116.26200873362446, 01:26:05.904 "max_latency_us": 3018433.6209606985 01:26:05.904 } 01:26:05.904 ], 01:26:05.904 "core_count": 1 01:26:05.904 } 01:26:06.166 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76368 01:26:06.166 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:26:06.166 [2024-12-09 05:20:18.180415] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:26:06.166 [2024-12-09 05:20:18.180480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76368 ] 01:26:06.166 [2024-12-09 05:20:18.309260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:06.166 [2024-12-09 05:20:18.360457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:26:06.166 [2024-12-09 05:20:18.401987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:26:06.166 Running I/O for 90 seconds... 
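For reference, the terminated bdevperf job above reports roughly 8290 IOPS (about 32.4 MiB/s) with an average latency of ~15.4 ms over a 28.46 s verify run. If that JSON block is captured to a file (the file name below is hypothetical, purely for illustration), the key figures can be pulled out with jq:

```bash
# Hypothetical helper: assumes the "results" JSON printed above was saved to
# bdevperf_results.json. Prints one summary line per job.
jq -r '.results[] | "\(.job): \(.iops | floor) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' \
  bdevperf_results.json
```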
01:26:06.166 8963.00 IOPS, 35.01 MiB/s [2024-12-09T05:20:48.622Z] 9304.00 IOPS, 36.34 MiB/s [2024-12-09T05:20:48.622Z] 9701.33 IOPS, 37.90 MiB/s [2024-12-09T05:20:48.622Z] 9890.00 IOPS, 38.63 MiB/s [2024-12-09T05:20:48.622Z] 9947.40 IOPS, 38.86 MiB/s [2024-12-09T05:20:48.622Z] 9775.50 IOPS, 38.19 MiB/s [2024-12-09T05:20:48.622Z] 9640.71 IOPS, 37.66 MiB/s [2024-12-09T05:20:48.622Z] 9612.00 IOPS, 37.55 MiB/s [2024-12-09T05:20:48.622Z] 9553.78 IOPS, 37.32 MiB/s [2024-12-09T05:20:48.622Z] 9468.80 IOPS, 36.99 MiB/s [2024-12-09T05:20:48.622Z] 9410.91 IOPS, 36.76 MiB/s [2024-12-09T05:20:48.622Z] 9423.33 IOPS, 36.81 MiB/s [2024-12-09T05:20:48.622Z] [2024-12-09 05:20:32.517452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.166 [2024-12-09 05:20:32.517531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.517582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.166 [2024-12-09 05:20:32.517592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.517607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.166 [2024-12-09 05:20:32.517616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.517630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.166 [2024-12-09 05:20:32.517638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.517653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.166 [2024-12-09 05:20:32.517661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.517675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.166 [2024-12-09 05:20:32.517683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.517696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.166 [2024-12-09 05:20:32.517704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.517718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.166 [2024-12-09 05:20:32.517727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 
05:20:32.518288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.166 [2024-12-09 05:20:32.518307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.166 [2024-12-09 05:20:32.518378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.166 [2024-12-09 05:20:32.518402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.166 [2024-12-09 05:20:32.518426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.166 [2024-12-09 05:20:32.518451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.166 [2024-12-09 05:20:32.518477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.166 [2024-12-09 05:20:32.518504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.166 [2024-12-09 05:20:32.518528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.166 [2024-12-09 05:20:32.518553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.166 [2024-12-09 05:20:32.518584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.166 [2024-12-09 05:20:32.518608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.166 [2024-12-09 05:20:32.518633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.166 [2024-12-09 05:20:32.518658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.166 [2024-12-09 05:20:32.518693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.166 [2024-12-09 05:20:32.518721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.166 [2024-12-09 05:20:32.518749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.166 [2024-12-09 05:20:32.518777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.166 [2024-12-09 05:20:32.518804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:26:06.166 [2024-12-09 05:20:32.518820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.518829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.518842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.518851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.518867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.518877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.518891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.518902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.518917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.518925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.518939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.518948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.518962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.518970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.518988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:26:06.167 [2024-12-09 05:20:32.519101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.167 [2024-12-09 05:20:32.519210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.167 [2024-12-09 05:20:32.519234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.167 [2024-12-09 05:20:32.519258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.167 [2024-12-09 05:20:32.519283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.167 [2024-12-09 05:20:32.519308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.167 [2024-12-09 05:20:32.519332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.167 [2024-12-09 05:20:32.519382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.167 [2024-12-09 05:20:32.519407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:43016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:43024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:43048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.167 [2024-12-09 05:20:32.519809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:26:06.167 [2024-12-09 05:20:32.519930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.167 [2024-12-09 05:20:32.519943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.519962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.519972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.519989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
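Every completion in this stretch of the run carries the same status pair, printed as ASYMMETRIC ACCESS INACCESSIBLE (03/02): status code type 0x3 is the path-related status group in the NVMe base specification, and status code 0x02 within that group is Asymmetric Access Inaccessible, meaning the ANA state of this controller path currently forbids I/O to the namespace. dnr:0 in these lines means the do-not-retry bit is clear, so the host is free to retry the commands once a usable path is available. A minimal standalone decoder for that pair, written here without SPDK headers and using only the spec-defined code values that the log itself prints, could look like:

#include <stdint.h>
#include <stdio.h>

/* Minimal decode of the "(SCT/SC)" pair printed by spdk_nvme_print_completion.
 * The values come straight from the log: sct=0x3, sc=0x02. */
static const char *decode_status(uint8_t sct, uint8_t sc)
{
    if (sct == 0x3) {                       /* Path Related Status group (NVMe base spec) */
        switch (sc) {
        case 0x00: return "INTERNAL PATH ERROR";
        case 0x01: return "ASYMMETRIC ACCESS PERSISTENT LOSS";
        case 0x02: return "ASYMMETRIC ACCESS INACCESSIBLE";
        case 0x03: return "ASYMMETRIC ACCESS TRANSITION";
        default:   return "OTHER PATH-RELATED STATUS";
        }
    }
    return "NOT A PATH-RELATED STATUS";
}

int main(void)
{
    /* The pair shown throughout this window of the log. */
    printf("(03/02) -> %s\n", decode_status(0x3, 0x02));
    return 0;
}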
01:26:06.168 [2024-12-09 05:20:32.520016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:43520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:43560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.168 [2024-12-09 05:20:32.520399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.168 [2024-12-09 05:20:32.520427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.168 [2024-12-09 05:20:32.520464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.168 [2024-12-09 05:20:32.520493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.168 [2024-12-09 05:20:32.520521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:43112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.168 [2024-12-09 05:20:32.520557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.168 [2024-12-09 05:20:32.520582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.168 [2024-12-09 05:20:32.520608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:43616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
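One more detail worth reading out of these completions: sqhd is the submission queue head pointer the controller reports back. In the entries just below it advances by one slot per completion and wraps from 0x007f back to 0x0000, which points to a 128-entry I/O submission queue on this qpair. A small sketch of that modular advance follows; the queue depth of 128 is an inference from the observed wrap, not a value stated elsewhere in the log:

#include <stdint.h>
#include <stdio.h>

#define SQ_ENTRIES 128u   /* inferred from sqhd wrapping 0x007f -> 0x0000 in this log */

/* Advance a submission-queue head the way the sqhd values in these
 * completions advance: one slot per completion, modulo the queue size. */
static uint16_t sqhd_next(uint16_t sqhd)
{
    return (uint16_t)((sqhd + 1) % SQ_ENTRIES);
}

int main(void)
{
    uint16_t sqhd = 0x7e;
    for (int i = 0; i < 4; i++) {           /* prints 007f 0000 0001 0002 */
        sqhd = sqhd_next(sqhd);
        printf("sqhd:%04x\n", sqhd);
    }
    return 0;
}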
01:26:06.168 [2024-12-09 05:20:32.520860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.520984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.520992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.521008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.521017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:26:06.168 [2024-12-09 05:20:32.521033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.168 [2024-12-09 05:20:32.521043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:32.521070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:32.521096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:32.521122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:32.521154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:32.521180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:32.521208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:32.521236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:43192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:32.521263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.169 [2024-12-09 05:20:32.521381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.169 [2024-12-09 05:20:32.521411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.169 [2024-12-09 05:20:32.521442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.169 [2024-12-09 05:20:32.521473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.169 [2024-12-09 05:20:32.521502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.169 [2024-12-09 05:20:32.521532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.169 [2024-12-09 05:20:32.521560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.169 [2024-12-09 05:20:32.521594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:32.521624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:32.521652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:32.521679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:32.521708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:43232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:32.521737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:32.521769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
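Interleaved with the per-command notices just below is a run of throughput progress samples, starting at 9233.23 IOPS (36.07 MiB/s), dipping to about 7690 IOPS, and climbing back to roughly 8098 IOPS by the last sample shown. Every data command in this window is len:8, i.e. eight 512-byte blocks or 4 KiB, so the MiB/s column is simply the IOPS figure scaled by the I/O size; the 4 KiB value in the sketch below is read off those len:8 commands rather than stated by the tool:

#include <stdio.h>

/* Convert IOPS to MiB/s for fixed-size I/O.  The 4 KiB I/O size is taken
 * from the len:8 (eight 512-byte blocks) commands in this log. */
static double iops_to_mibps(double iops, unsigned io_bytes)
{
    return iops * io_bytes / (1024.0 * 1024.0);
}

int main(void)
{
    /* Reproduces the first and last samples shown below: ~36.07 and ~31.64 MiB/s. */
    printf("%.2f MiB/s\n", iops_to_mibps(9233.23, 4096));
    printf("%.2f MiB/s\n", iops_to_mibps(8098.69, 4096));
    return 0;
}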
01:26:06.169 [2024-12-09 05:20:32.521789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:32.521799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:32.521817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:32.521829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:26:06.169 9233.23 IOPS, 36.07 MiB/s [2024-12-09T05:20:48.625Z] 8573.71 IOPS, 33.49 MiB/s [2024-12-09T05:20:48.625Z] 8002.13 IOPS, 31.26 MiB/s [2024-12-09T05:20:48.625Z] 7690.44 IOPS, 30.04 MiB/s [2024-12-09T05:20:48.625Z] 7761.82 IOPS, 30.32 MiB/s [2024-12-09T05:20:48.625Z] 7823.06 IOPS, 30.56 MiB/s [2024-12-09T05:20:48.625Z] 7869.68 IOPS, 30.74 MiB/s [2024-12-09T05:20:48.625Z] 7912.35 IOPS, 30.91 MiB/s [2024-12-09T05:20:48.625Z] 7949.95 IOPS, 31.05 MiB/s [2024-12-09T05:20:48.625Z] 7993.32 IOPS, 31.22 MiB/s [2024-12-09T05:20:48.625Z] 8025.61 IOPS, 31.35 MiB/s [2024-12-09T05:20:48.625Z] 8055.38 IOPS, 31.47 MiB/s [2024-12-09T05:20:48.625Z] 8077.56 IOPS, 31.55 MiB/s [2024-12-09T05:20:48.625Z] 8098.69 IOPS, 31.64 MiB/s [2024-12-09T05:20:48.625Z] [2024-12-09 05:20:45.970549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.169 [2024-12-09 05:20:45.970611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:45.970652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.169 [2024-12-09 05:20:45.970663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:45.970705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:45.970714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:45.970728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:45.970737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:45.970750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:45.970759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:45.970772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:45.970780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:45.970793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:45.970802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:45.970815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:45.970824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:45.970837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.169 [2024-12-09 05:20:45.970846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:45.970859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:45.970867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:45.970880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:121832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.169 [2024-12-09 05:20:45.970889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:45.970902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.169 [2024-12-09 05:20:45.970910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:26:06.169 [2024-12-09 05:20:45.970924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.170 [2024-12-09 05:20:45.970933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.970946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.170 [2024-12-09 05:20:45.970956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.970975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.170 [2024-12-09 05:20:45.970985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.970998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.170 [2024-12-09 
05:20:45.971007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.170 [2024-12-09 05:20:45.971375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.971406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.971429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.971451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:121096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.971473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.971500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.971522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.971544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:121912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.971568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121944 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.971592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.971625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.170 [2024-12-09 05:20:45.971649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.170 [2024-12-09 05:20:45.971671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:122016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.170 [2024-12-09 05:20:45.971695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.971718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.170 [2024-12-09 05:20:45.971741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:122040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.170 [2024-12-09 05:20:45.971768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.170 [2024-12-09 05:20:45.971795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.971812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.170 [2024-12-09 05:20:45.971822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.973047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:122088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.170 [2024-12-09 05:20:45.973071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.973090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:121136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.973100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.973115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.973124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.973138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.973158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.973172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.973181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.973198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:122104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.170 [2024-12-09 05:20:45.973206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.973221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.973230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.973244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.973253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.973267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.170 [2024-12-09 05:20:45.973275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:26:06.170 [2024-12-09 05:20:45.973289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:122120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.170 [2024-12-09 05:20:45.973298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 
dnr:0 01:26:06.170 [2024-12-09 05:20:45.973314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.170 [2024-12-09 05:20:45.973332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.973347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.973356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.973370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:122152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.973379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.973393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:122168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.973402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.973416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:122184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.973426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.973440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:122200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.973449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.974719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:122216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.974740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.974757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:122232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.974768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.974783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:122248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.974793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.974807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:122264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.974816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.974830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.974839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.974853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.974862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.974877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.974887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.974901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.974910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.974924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.974932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.974947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.974956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.974970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.974979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.974995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.975004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.975028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:122280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.975038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.975052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:122296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.975061] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.975074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:122312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.975084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.975099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:122328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.975108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.976220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.976239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.976256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:122360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.976265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.976280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:122376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.976289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.976303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.976313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.976337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.976346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.976360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.976370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.976384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.976394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.976409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:122384 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.976421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.976447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:122400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.976457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.976471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.976480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.976495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.976504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.976518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.976528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.976542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.171 [2024-12-09 05:20:45.976551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.976566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.976575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.976588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:122440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.976598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.976611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.976620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.977687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.977708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:26:06.171 [2024-12-09 05:20:45.977725] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:122488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.171 [2024-12-09 05:20:45.977734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.977747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.977757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.977771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.172 [2024-12-09 05:20:45.977780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.977794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.172 [2024-12-09 05:20:45.977810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.977824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.172 [2024-12-09 05:20:45.977834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.977848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.172 [2024-12-09 05:20:45.977858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.977874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.172 [2024-12-09 05:20:45.977883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.977898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.977907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.977921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.977930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.977947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.172 [2024-12-09 05:20:45.977957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 
05:20:45.977971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.172 [2024-12-09 05:20:45.977981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.977995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.172 [2024-12-09 05:20:45.978003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.978017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.978026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.978039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:122568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.978049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.978064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:122584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.978073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:122600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.979148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:122616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.979176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.979199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.979222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.172 [2024-12-09 05:20:45.979246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.172 [2024-12-09 05:20:45.979269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:122544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.172 [2024-12-09 05:20:45.979292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.172 [2024-12-09 05:20:45.979317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.979351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.172 [2024-12-09 05:20:45.979374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.172 [2024-12-09 05:20:45.979397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.172 [2024-12-09 05:20:45.979421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:26:06.172 [2024-12-09 05:20:45.979447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.979478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.979501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.979525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.979770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.979795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:122760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.979817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:26:06.172 [2024-12-09 05:20:45.979831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:26:06.172 [2024-12-09 05:20:45.979841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:26:06.172 8178.70 IOPS, 31.95 MiB/s [2024-12-09T05:20:48.628Z] 8264.32 IOPS, 32.28 MiB/s [2024-12-09T05:20:48.628Z] Received shutdown signal, test time was about 28.462616 seconds 01:26:06.172 01:26:06.172 Latency(us) 01:26:06.172 [2024-12-09T05:20:48.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:26:06.172 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:26:06.172 Verification LBA range: start 0x0 length 0x4000 01:26:06.172 Nvme0n1 : 28.46 8290.67 32.39 0.00 0.00 15417.12 116.26 3018433.62 01:26:06.172 [2024-12-09T05:20:48.628Z] =================================================================================================================== 01:26:06.172 [2024-12-09T05:20:48.628Z] Total : 8290.67 32.39 0.00 0.00 15417.12 116.26 3018433.62 01:26:06.172 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- 
# sync 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:26:06.431 rmmod nvme_tcp 01:26:06.431 rmmod nvme_fabrics 01:26:06.431 rmmod nvme_keyring 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76322 ']' 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76322 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76322 ']' 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76322 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76322 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:06.431 killing process with pid 76322 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76322' 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76322 01:26:06.431 05:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76322 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set 
nvmf_init_br nomaster 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:06.998 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:07.257 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 01:26:07.257 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:07.257 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:07.257 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:07.257 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 01:26:07.257 01:26:07.257 real 0m34.174s 01:26:07.257 user 1m47.063s 01:26:07.257 sys 0m9.772s 01:26:07.257 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:07.257 05:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:26:07.257 ************************************ 01:26:07.257 END TEST nvmf_host_multipath_status 01:26:07.257 ************************************ 01:26:07.257 05:20:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 01:26:07.257 05:20:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:26:07.257 05:20:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:07.257 05:20:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:26:07.257 ************************************ 01:26:07.257 START TEST nvmf_discovery_remove_ifc 01:26:07.257 ************************************ 01:26:07.257 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 01:26:07.257 * Looking for test storage... 
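The ASYMMETRIC ACCESS INACCESSIBLE (03/02) notices filling the block above appear to be the ANA state the multipath_status test drives one path into; the verify workload still completes (8290.67 average IOPS in the summary above) and the harness then tears everything down via nvmftestfini before the next test begins. Condensed, that teardown is roughly the following shell sequence; every command except the final namespace deletion is taken from the xtrace lines above, and that last step is an assumption about what remove_spdk_ns amounts to.

# teardown sketch (nvmftestfini); interface, bridge and namespace names as created earlier in this log
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp          # also drops nvme_fabrics and nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                  # stop the target app (pid 76322 in this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip the SPDK-tagged ACCEPT rules
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster && ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns del nvmf_tgt_ns_spdk    # assumed: what remove_spdk_ns boils down to (not shown verbatim above)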
01:26:07.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:26:07.257 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:07.257 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 01:26:07.516 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:07.516 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:07.516 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:07.516 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:07.516 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:07.516 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 01:26:07.516 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 01:26:07.516 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 01:26:07.516 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:07.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:07.517 --rc genhtml_branch_coverage=1 01:26:07.517 --rc genhtml_function_coverage=1 01:26:07.517 --rc genhtml_legend=1 01:26:07.517 --rc geninfo_all_blocks=1 01:26:07.517 --rc geninfo_unexecuted_blocks=1 01:26:07.517 01:26:07.517 ' 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:07.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:07.517 --rc genhtml_branch_coverage=1 01:26:07.517 --rc genhtml_function_coverage=1 01:26:07.517 --rc genhtml_legend=1 01:26:07.517 --rc geninfo_all_blocks=1 01:26:07.517 --rc geninfo_unexecuted_blocks=1 01:26:07.517 01:26:07.517 ' 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:07.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:07.517 --rc genhtml_branch_coverage=1 01:26:07.517 --rc genhtml_function_coverage=1 01:26:07.517 --rc genhtml_legend=1 01:26:07.517 --rc geninfo_all_blocks=1 01:26:07.517 --rc geninfo_unexecuted_blocks=1 01:26:07.517 01:26:07.517 ' 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:07.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:07.517 --rc genhtml_branch_coverage=1 01:26:07.517 --rc genhtml_function_coverage=1 01:26:07.517 --rc genhtml_legend=1 01:26:07.517 --rc geninfo_all_blocks=1 01:26:07.517 --rc geninfo_unexecuted_blocks=1 01:26:07.517 01:26:07.517 ' 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:26:07.517 05:20:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:26:07.517 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 01:26:07.517 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:26:07.518 05:20:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:26:07.518 Cannot find device "nvmf_init_br" 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:26:07.518 Cannot find device "nvmf_init_br2" 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:26:07.518 Cannot find device "nvmf_tgt_br" 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:26:07.518 Cannot find device "nvmf_tgt_br2" 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:26:07.518 Cannot find device "nvmf_init_br" 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:26:07.518 Cannot find device "nvmf_init_br2" 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:26:07.518 Cannot find device "nvmf_tgt_br" 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 01:26:07.518 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:26:07.777 Cannot find device "nvmf_tgt_br2" 01:26:07.777 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 01:26:07.777 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:26:07.777 Cannot find device "nvmf_br" 01:26:07.777 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 01:26:07.777 05:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:26:07.777 Cannot find device "nvmf_init_if" 01:26:07.777 05:20:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:26:07.777 Cannot find device "nvmf_init_if2" 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:07.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:07.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:26:07.777 05:20:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:26:07.777 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:26:07.778 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:26:08.037 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:26:08.037 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 01:26:08.037 01:26:08.037 --- 10.0.0.3 ping statistics --- 01:26:08.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:08.037 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:26:08.037 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:26:08.037 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 01:26:08.037 01:26:08.037 --- 10.0.0.4 ping statistics --- 01:26:08.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:08.037 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:26:08.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:26:08.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 01:26:08.037 01:26:08.037 --- 10.0.0.1 ping statistics --- 01:26:08.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:08.037 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:26:08.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:26:08.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 01:26:08.037 01:26:08.037 --- 10.0.0.2 ping statistics --- 01:26:08.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:08.037 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77159 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77159 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77159 ']' 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:08.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
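The one-packet pings above verify the veth topology that nvmf_veth_init just rebuilt for this test: the initiator ends stay in the root namespace (10.0.0.1 and 10.0.0.2), the target ends live inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), and the four bridge-side peers are enslaved to nvmf_br. Condensed from the trace (the loop structure is mine; names, addresses and rules are as logged):

ip netns add nvmf_tgt_ns_spdk
for pair in nvmf_init_if:nvmf_init_br nvmf_init_if2:nvmf_init_br2 \
            nvmf_tgt_if:nvmf_tgt_br nvmf_tgt_if2:nvmf_tgt_br2; do
    ip link add "${pair%%:*}" type veth peer name "${pair##*:}"
done
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br && ip link set "$dev" up
done
# bring the interface ends up as well, insert the SPDK_NVMF-tagged iptables ACCEPT
# rules for TCP port 4420, then ping each of the four addresses once, as logged above

The target application that follows is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2), so its listeners sit on the 10.0.0.3/10.0.0.4 side of the bridge.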
01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:08.037 05:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:08.037 [2024-12-09 05:20:50.404857] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:26:08.037 [2024-12-09 05:20:50.404965] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:26:08.297 [2024-12-09 05:20:50.558438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:08.297 [2024-12-09 05:20:50.600831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:26:08.297 [2024-12-09 05:20:50.600880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:26:08.297 [2024-12-09 05:20:50.600886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:26:08.297 [2024-12-09 05:20:50.600890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:26:08.297 [2024-12-09 05:20:50.600894] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:26:08.297 [2024-12-09 05:20:50.601155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:26:08.297 [2024-12-09 05:20:50.641362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:26:08.863 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:08.863 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 01:26:08.863 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:26:08.863 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 01:26:08.863 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:08.864 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:26:08.864 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 01:26:08.864 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:08.864 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:09.122 [2024-12-09 05:20:51.322519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:26:09.122 [2024-12-09 05:20:51.330604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 01:26:09.122 null0 01:26:09.122 [2024-12-09 05:20:51.362476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:26:09.122 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:09.122 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77191 01:26:09.122 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 01:26:09.122 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77191 /tmp/host.sock 01:26:09.122 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77191 ']' 01:26:09.122 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 01:26:09.122 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:09.122 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:26:09.122 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:26:09.122 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:09.122 05:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:09.123 [2024-12-09 05:20:51.435310] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:26:09.123 [2024-12-09 05:20:51.435468] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77191 ] 01:26:09.123 [2024-12-09 05:20:51.570348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:26:09.381 [2024-12-09 05:20:51.620366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:26:09.949 05:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:09.949 05:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 01:26:09.949 05:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:26:09.949 05:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 01:26:09.949 05:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:09.949 05:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:09.949 05:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:09.949 05:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 01:26:09.949 05:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:09.949 05:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:09.949 [2024-12-09 05:20:52.388043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:26:10.208 05:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:10.208 05:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 01:26:10.208 05:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:10.208 05:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:11.145 [2024-12-09 05:20:53.431058] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:26:11.145 [2024-12-09 05:20:53.431140] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:26:11.145 [2024-12-09 05:20:53.431187] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:26:11.145 [2024-12-09 05:20:53.437081] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 01:26:11.145 [2024-12-09 05:20:53.491380] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 01:26:11.145 [2024-12-09 05:20:53.492294] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1814000:1 started. 01:26:11.145 [2024-12-09 05:20:53.493839] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 01:26:11.145 [2024-12-09 05:20:53.493920] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 01:26:11.145 [2024-12-09 05:20:53.493979] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 01:26:11.145 [2024-12-09 05:20:53.494026] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 01:26:11.145 [2024-12-09 05:20:53.494093] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:26:11.145 [2024-12-09 05:20:53.499860] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1814000 was disconnected and freed. delete nvme_qpair. 
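The host-side setup traced above reduces to a short RPC sequence against the app's Unix socket. A minimal sketch of the same steps, assuming the test's rpc_cmd helper forwards to SPDK's scripts/rpc.py (command names and flag values copied from the xtrace):

  # start the host app idle, with only the RPC server up, then drive it over /tmp/host.sock
  build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  # attach via the discovery service on 10.0.0.3:8009 and block until the subsystem is attached
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

The attach messages above (nvme0 created against 10.0.0.3:4420) are the result of that last call; the get_bdev_list polling that follows confirms the bdev nvme0n1 is present.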
01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:26:11.145 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:11.403 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:26:11.403 05:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:26:12.339 05:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:26:12.339 05:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:26:12.339 05:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:26:12.339 05:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:26:12.339 05:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:26:12.339 05:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:12.339 05:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:12.339 05:20:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:12.339 05:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:26:12.339 05:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:26:13.274 05:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:26:13.274 05:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:26:13.274 05:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:26:13.274 05:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:26:13.274 05:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:26:13.274 05:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:13.274 05:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:13.274 05:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:13.274 05:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:26:13.274 05:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:26:14.651 05:20:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:26:14.651 05:20:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:26:14.651 05:20:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:26:14.651 05:20:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:26:14.652 05:20:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:14.652 05:20:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:14.652 05:20:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:26:14.652 05:20:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:14.652 05:20:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:26:14.652 05:20:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:26:15.590 05:20:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:26:15.590 05:20:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:26:15.590 05:20:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:15.590 05:20:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:26:15.590 05:20:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:15.590 05:20:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:26:15.590 05:20:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:26:15.590 05:20:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:15.590 05:20:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:26:15.590 05:20:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:26:16.525 05:20:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:26:16.525 05:20:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:26:16.525 05:20:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:26:16.525 05:20:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:26:16.525 05:20:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:16.525 05:20:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:26:16.525 05:20:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:16.525 05:20:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:16.525 05:20:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:26:16.525 05:20:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:26:16.525 [2024-12-09 05:20:58.911352] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 01:26:16.525 [2024-12-09 05:20:58.911406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:26:16.525 [2024-12-09 05:20:58.911416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:16.525 [2024-12-09 05:20:58.911425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:26:16.525 [2024-12-09 05:20:58.911431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:16.525 [2024-12-09 05:20:58.911438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:26:16.525 [2024-12-09 05:20:58.911444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:16.525 [2024-12-09 05:20:58.911450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:26:16.525 [2024-12-09 05:20:58.911456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:16.525 [2024-12-09 05:20:58.911462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 01:26:16.525 [2024-12-09 05:20:58.911467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:16.525 [2024-12-09 05:20:58.911473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0250 is same with the state(6) to be set 01:26:16.525 [2024-12-09 05:20:58.921328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f0250 (9): Bad file descriptor 01:26:16.525 [2024-12-09 05:20:58.931321] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:26:16.525 [2024-12-09 05:20:58.931360] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:26:16.525 [2024-12-09 05:20:58.931366] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:26:16.525 [2024-12-09 05:20:58.931370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:26:16.525 [2024-12-09 05:20:58.931400] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:26:17.460 05:20:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:26:17.460 05:20:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:26:17.460 05:20:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:26:17.460 05:20:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:17.460 05:20:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:26:17.460 05:20:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:26:17.460 05:20:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:17.719 [2024-12-09 05:20:59.988428] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 01:26:17.719 [2024-12-09 05:20:59.988699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f0250 with addr=10.0.0.3, port=4420 01:26:17.719 [2024-12-09 05:20:59.988794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f0250 is same with the state(6) to be set 01:26:17.719 [2024-12-09 05:20:59.988874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f0250 (9): Bad file descriptor 01:26:17.719 [2024-12-09 05:20:59.990165] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 01:26:17.719 [2024-12-09 05:20:59.990275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:26:17.719 [2024-12-09 05:20:59.990300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:26:17.719 [2024-12-09 05:20:59.990362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:26:17.719 [2024-12-09 05:20:59.990388] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:26:17.719 [2024-12-09 05:20:59.990404] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
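The repeated bdev_get_bdevs | jq | sort | xargs pipelines interleaved with sleep 1 above are the harness polling for the bdev list to change. An approximate reconstruction of those helpers from the xtrace (the real ones live in test/nvmf/host/discovery_remove_ifc.sh and may differ in detail, e.g. a retry limit):

  get_bdev_list() {
      # list the bdev names the host app currently exposes, normalized to one sorted line
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      # poll once a second until the list equals the expected value ('' means no bdevs left)
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }

Note on the trace output: because the right-hand side of != inside [[ ]] is quoted, bash compares it literally, and set -x renders such a word with every character backslash-escaped, which is why the expected name appears above as \n\v\m\e\0\n\1.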
01:26:17.719 [2024-12-09 05:20:59.990417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:26:17.719 [2024-12-09 05:20:59.990441] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:26:17.719 [2024-12-09 05:20:59.990455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:26:17.719 05:21:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:17.719 05:21:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:26:17.719 05:21:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:26:18.656 [2024-12-09 05:21:00.988689] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:26:18.656 [2024-12-09 05:21:00.988716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:26:18.656 [2024-12-09 05:21:00.988736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:26:18.656 [2024-12-09 05:21:00.988742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:26:18.656 [2024-12-09 05:21:00.988749] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 01:26:18.656 [2024-12-09 05:21:00.988754] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:26:18.656 [2024-12-09 05:21:00.988758] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:26:18.656 [2024-12-09 05:21:00.988761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
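The failed reconnect above is the expected consequence of the interface having been taken down, and the timer options passed to bdev_nvme_start_discovery earlier bound how long it is retried: reconnect attempts are spaced --reconnect-delay-sec (1 s) apart, outstanding I/O is failed back after --fast-io-fail-timeout-sec (1 s), and once --ctrlr-loss-timeout-sec (2 s) expires the controller is given up and its bdev deleted, which is exactly what the wait_for_bdev '' poll is waiting to observe. A side channel for watching this, not part of the test but a standard SPDK RPC:

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers

which lists the NVMe controllers the host app still knows about while the retries are in flight.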
01:26:18.656 [2024-12-09 05:21:00.988788] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 01:26:18.656 [2024-12-09 05:21:00.988819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:26:18.656 [2024-12-09 05:21:00.988827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:18.656 [2024-12-09 05:21:00.988836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:26:18.656 [2024-12-09 05:21:00.988842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:18.656 [2024-12-09 05:21:00.988848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:26:18.656 [2024-12-09 05:21:00.988854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:18.656 [2024-12-09 05:21:00.988860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:26:18.656 [2024-12-09 05:21:00.988865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:18.656 [2024-12-09 05:21:00.988871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 01:26:18.656 [2024-12-09 05:21:00.988875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:26:18.656 [2024-12-09 05:21:00.988881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
01:26:18.656 [2024-12-09 05:21:00.989344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177ba20 (9): Bad file descriptor 01:26:18.656 [2024-12-09 05:21:00.990355] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 01:26:18.656 [2024-12-09 05:21:00.990371] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:26:18.656 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:18.916 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:26:18.916 05:21:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:26:19.873 05:21:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:26:19.873 05:21:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:26:19.873 05:21:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:26:19.873 05:21:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:26:19.873 05:21:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:26:19.873 05:21:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:19.873 05:21:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:19.873 05:21:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:19.873 05:21:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:26:19.873 05:21:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:26:20.811 [2024-12-09 05:21:02.997262] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:26:20.811 [2024-12-09 05:21:02.997285] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:26:20.811 [2024-12-09 05:21:02.997298] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:26:20.811 [2024-12-09 05:21:03.003281] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 01:26:20.811 [2024-12-09 05:21:03.057445] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 01:26:20.811 [2024-12-09 05:21:03.058140] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x17fbd80:1 started. 01:26:20.811 [2024-12-09 05:21:03.059198] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 01:26:20.811 [2024-12-09 05:21:03.059279] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 01:26:20.811 [2024-12-09 05:21:03.059321] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 01:26:20.811 [2024-12-09 05:21:03.059370] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 01:26:20.811 [2024-12-09 05:21:03.059411] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:26:20.811 [2024-12-09 05:21:03.065967] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x17fbd80 was disconnected and freed. delete nvme_qpair. 
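At this point the path has been restored and the discovery service has re-attached, creating nvme1 and nvme1n1. Pulling the trigger-and-recovery steps out of the trace into one place (same namespace, interface and address the harness uses; timing per the timers noted earlier):

  # fault injection: remove the listener's address and take the target-side veth down
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  # expected: reconnects fail, ctrlr-loss-timeout expires, nvme0n1 drops out of bdev_get_bdevs
  # recovery: put the address back and bring the link up
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # expected: discovery re-attaches and a fresh bdev (nvme1n1) appears

The final get_bdev_list check below confirms the new bdev before the harness tears the setup down.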
01:26:20.812 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:26:20.812 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:26:20.812 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:20.812 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:26:20.812 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:26:20.812 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:20.812 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:26:20.812 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:20.812 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 01:26:20.812 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 01:26:20.812 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77191 01:26:20.812 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77191 ']' 01:26:20.812 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77191 01:26:20.812 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 01:26:20.812 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:20.812 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77191 01:26:21.071 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:26:21.071 killing process with pid 77191 01:26:21.071 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:26:21.071 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77191' 01:26:21.071 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77191 01:26:21.071 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77191 01:26:21.071 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 01:26:21.071 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 01:26:21.071 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:26:21.330 rmmod nvme_tcp 01:26:21.330 rmmod nvme_fabrics 01:26:21.330 rmmod nvme_keyring 01:26:21.330 05:21:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77159 ']' 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77159 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77159 ']' 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77159 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77159 01:26:21.330 killing process with pid 77159 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77159' 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77159 01:26:21.330 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77159 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:26:21.590 05:21:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:26:21.590 05:21:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:26:21.848 05:21:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:26:21.848 05:21:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:21.848 05:21:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:21.848 05:21:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 01:26:21.848 05:21:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:21.848 05:21:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:21.848 05:21:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:21.848 05:21:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 01:26:21.848 01:26:21.848 real 0m14.585s 01:26:21.848 user 0m24.361s 01:26:21.848 sys 0m2.666s 01:26:21.848 05:21:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:21.848 ************************************ 01:26:21.848 END TEST nvmf_discovery_remove_ifc 01:26:21.848 ************************************ 01:26:21.848 05:21:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:26:21.848 05:21:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 01:26:21.848 05:21:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:26:21.848 05:21:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:21.848 05:21:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:26:21.848 ************************************ 01:26:21.848 START TEST nvmf_identify_kernel_target 01:26:21.848 ************************************ 01:26:21.848 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 01:26:22.107 * Looking for test storage... 
01:26:22.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:22.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:22.107 --rc genhtml_branch_coverage=1 01:26:22.107 --rc genhtml_function_coverage=1 01:26:22.107 --rc genhtml_legend=1 01:26:22.107 --rc geninfo_all_blocks=1 01:26:22.107 --rc geninfo_unexecuted_blocks=1 01:26:22.107 01:26:22.107 ' 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:22.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:22.107 --rc genhtml_branch_coverage=1 01:26:22.107 --rc genhtml_function_coverage=1 01:26:22.107 --rc genhtml_legend=1 01:26:22.107 --rc geninfo_all_blocks=1 01:26:22.107 --rc geninfo_unexecuted_blocks=1 01:26:22.107 01:26:22.107 ' 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:22.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:22.107 --rc genhtml_branch_coverage=1 01:26:22.107 --rc genhtml_function_coverage=1 01:26:22.107 --rc genhtml_legend=1 01:26:22.107 --rc geninfo_all_blocks=1 01:26:22.107 --rc geninfo_unexecuted_blocks=1 01:26:22.107 01:26:22.107 ' 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:22.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:22.107 --rc genhtml_branch_coverage=1 01:26:22.107 --rc genhtml_function_coverage=1 01:26:22.107 --rc genhtml_legend=1 01:26:22.107 --rc geninfo_all_blocks=1 01:26:22.107 --rc geninfo_unexecuted_blocks=1 01:26:22.107 01:26:22.107 ' 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
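The version probe traced above decides which lcov flags to export: lt 1.15 2 splits both versions on dots and compares them field by field. A simplified sketch of that comparison (the real cmp_versions in scripts/common.sh supports more operators and sanitizes each field via its decimal helper; version_lt is a hypothetical name used only here):

  version_lt() {
      # return 0 when version $1 sorts before version $2
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
      for ((v = 0; v < len; v++)); do
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
      done
      return 1   # equal versions are not less-than
  }
  # e.g. version_lt "$(lcov --version | awk '{print $NF}')" 2  ->  old lcov, so the --rc flags above are exported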
01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:26:22.107 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 01:26:22.107 05:21:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:26:22.107 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:26:22.108 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:26:22.108 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:26:22.108 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:26:22.108 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:26:22.108 05:21:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:26:22.108 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:26:22.108 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:26:22.108 Cannot find device "nvmf_init_br" 01:26:22.108 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 01:26:22.108 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:26:22.108 Cannot find device "nvmf_init_br2" 01:26:22.108 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 01:26:22.108 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:26:22.367 Cannot find device "nvmf_tgt_br" 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:26:22.367 Cannot find device "nvmf_tgt_br2" 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:26:22.367 Cannot find device "nvmf_init_br" 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:26:22.367 Cannot find device "nvmf_init_br2" 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:26:22.367 Cannot find device "nvmf_tgt_br" 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:26:22.367 Cannot find device "nvmf_tgt_br2" 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:26:22.367 Cannot find device "nvmf_br" 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:26:22.367 Cannot find device "nvmf_init_if" 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:26:22.367 Cannot find device "nvmf_init_if2" 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:22.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:22.367 05:21:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:22.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:26:22.367 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:26:22.626 05:21:04 
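Note: condensed, the topology nvmf_veth_init builds here is two initiator-side veth pairs in the root namespace (10.0.0.1/24 and 10.0.0.2/24) and two target-side pairs whose far ends live in the nvmf_tgt_ns_spdk namespace (10.0.0.3/24 and 10.0.0.4/24), all joined by the nvmf_br bridge. A standalone sketch reconstructed from the traced commands, showing only the first pair of each kind (the *_if2/*_br2 pair is set up identically):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target endpoint moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br     # enslaving happens at common.sh@211-214 just below
ip link set nvmf_tgt_br master nvmf_br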
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:26:22.626 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:26:22.626 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 01:26:22.626 01:26:22.626 --- 10.0.0.3 ping statistics --- 01:26:22.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:22.626 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:26:22.626 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:26:22.626 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.090 ms 01:26:22.626 01:26:22.626 --- 10.0.0.4 ping statistics --- 01:26:22.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:22.626 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:26:22.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:26:22.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 01:26:22.626 01:26:22.626 --- 10.0.0.1 ping statistics --- 01:26:22.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:22.626 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:26:22.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
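Note: the `ipts` calls above expand to plain iptables invocations with an extra SPDK_NVMF comment recording the original arguments, so the teardown can later strip exactly those rules (see the `iptables-save | grep -v SPDK_NVMF | iptables-restore` sequence near the end of this test). A minimal sketch of that wrapper, assuming the comment-tagging visible in the expanded commands:
ipts() {
  # tag every rule with the original arguments so cleanup can filter on SPDK_NVMF
  iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# teardown: drop every rule carrying the SPDK_NVMF comment
iptables-save | grep -v SPDK_NVMF | iptables-restore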
01:26:22.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 01:26:22.626 01:26:22.626 --- 10.0.0.2 ping statistics --- 01:26:22.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:22.626 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:26:22.626 05:21:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:26:23.191 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:26:23.191 Waiting for block devices as requested 01:26:23.192 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:26:23.450 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:26:23.450 No valid GPT data, bailing 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 01:26:23.450 05:21:05 
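Note: the loop traced below (common.sh@678-@681) walks /sys/block/nvme*, skips zoned namespaces, and keeps the last device whose partition-table probe comes back empty ("No valid GPT data, bailing"), i.e. a namespace not already in use; /dev/nvme1n1 ends up selected as the backing device for the kernel target. A simplified sketch of that selection (the real block_in_use also consults scripts/spdk-gpt.py; blkid-only probing here is a simplification):
nvme=
for block in /sys/block/nvme*; do
  [[ -e $block ]] || continue
  dev=${block##*/}
  [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue   # skip zoned devices
  if ! blkid -s PTTYPE -o value "/dev/$dev" | grep -q .; then
    nvme=/dev/$dev        # no partition table -> candidate namespace for the kernel target
  fi
done
echo "selected backing device: $nvme"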
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:26:23.450 No valid GPT data, bailing 01:26:23.450 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 01:26:23.451 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:26:23.451 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:26:23.451 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 01:26:23.451 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:26:23.451 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 01:26:23.451 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 01:26:23.451 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 01:26:23.451 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:26:23.451 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:26:23.451 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 01:26:23.451 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 01:26:23.451 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:26:23.451 No valid GPT data, bailing 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:26:23.709 No valid GPT data, bailing 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:26:23.709 05:21:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:26:23.709 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:26:23.709 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:26:23.709 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 01:26:23.709 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 01:26:23.709 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 01:26:23.709 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 01:26:23.709 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 01:26:23.709 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 01:26:23.709 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 01:26:23.709 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:26:23.709 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid=0567fff2-ddaf-4a4f-877d-a2600d7e662b -a 10.0.0.1 -t tcp -s 4420 01:26:23.709 01:26:23.709 Discovery Log Number of Records 2, Generation counter 2 01:26:23.709 =====Discovery Log Entry 0====== 01:26:23.709 trtype: tcp 01:26:23.709 adrfam: ipv4 01:26:23.709 subtype: current discovery subsystem 01:26:23.709 treq: not specified, sq flow control disable supported 01:26:23.709 portid: 1 01:26:23.709 trsvcid: 4420 01:26:23.709 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:26:23.709 traddr: 10.0.0.1 01:26:23.709 eflags: none 01:26:23.709 sectype: none 01:26:23.709 =====Discovery Log Entry 1====== 01:26:23.709 trtype: tcp 01:26:23.709 adrfam: ipv4 01:26:23.709 subtype: nvme subsystem 01:26:23.709 treq: not 
specified, sq flow control disable supported 01:26:23.709 portid: 1 01:26:23.709 trsvcid: 4420 01:26:23.709 subnqn: nqn.2016-06.io.spdk:testnqn 01:26:23.709 traddr: 10.0.0.1 01:26:23.709 eflags: none 01:26:23.709 sectype: none 01:26:23.709 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 01:26:23.709 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 01:26:23.968 ===================================================== 01:26:23.968 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 01:26:23.968 ===================================================== 01:26:23.968 Controller Capabilities/Features 01:26:23.968 ================================ 01:26:23.968 Vendor ID: 0000 01:26:23.968 Subsystem Vendor ID: 0000 01:26:23.968 Serial Number: 84e4565226b055686776 01:26:23.968 Model Number: Linux 01:26:23.968 Firmware Version: 6.8.9-20 01:26:23.968 Recommended Arb Burst: 0 01:26:23.968 IEEE OUI Identifier: 00 00 00 01:26:23.968 Multi-path I/O 01:26:23.968 May have multiple subsystem ports: No 01:26:23.968 May have multiple controllers: No 01:26:23.968 Associated with SR-IOV VF: No 01:26:23.968 Max Data Transfer Size: Unlimited 01:26:23.968 Max Number of Namespaces: 0 01:26:23.968 Max Number of I/O Queues: 1024 01:26:23.968 NVMe Specification Version (VS): 1.3 01:26:23.968 NVMe Specification Version (Identify): 1.3 01:26:23.968 Maximum Queue Entries: 1024 01:26:23.968 Contiguous Queues Required: No 01:26:23.968 Arbitration Mechanisms Supported 01:26:23.968 Weighted Round Robin: Not Supported 01:26:23.968 Vendor Specific: Not Supported 01:26:23.968 Reset Timeout: 7500 ms 01:26:23.968 Doorbell Stride: 4 bytes 01:26:23.968 NVM Subsystem Reset: Not Supported 01:26:23.968 Command Sets Supported 01:26:23.968 NVM Command Set: Supported 01:26:23.968 Boot Partition: Not Supported 01:26:23.968 Memory Page Size Minimum: 4096 bytes 01:26:23.968 Memory Page Size Maximum: 4096 bytes 01:26:23.968 Persistent Memory Region: Not Supported 01:26:23.968 Optional Asynchronous Events Supported 01:26:23.968 Namespace Attribute Notices: Not Supported 01:26:23.968 Firmware Activation Notices: Not Supported 01:26:23.968 ANA Change Notices: Not Supported 01:26:23.968 PLE Aggregate Log Change Notices: Not Supported 01:26:23.968 LBA Status Info Alert Notices: Not Supported 01:26:23.968 EGE Aggregate Log Change Notices: Not Supported 01:26:23.968 Normal NVM Subsystem Shutdown event: Not Supported 01:26:23.968 Zone Descriptor Change Notices: Not Supported 01:26:23.968 Discovery Log Change Notices: Supported 01:26:23.968 Controller Attributes 01:26:23.968 128-bit Host Identifier: Not Supported 01:26:23.968 Non-Operational Permissive Mode: Not Supported 01:26:23.968 NVM Sets: Not Supported 01:26:23.968 Read Recovery Levels: Not Supported 01:26:23.968 Endurance Groups: Not Supported 01:26:23.968 Predictable Latency Mode: Not Supported 01:26:23.968 Traffic Based Keep ALive: Not Supported 01:26:23.968 Namespace Granularity: Not Supported 01:26:23.968 SQ Associations: Not Supported 01:26:23.968 UUID List: Not Supported 01:26:23.968 Multi-Domain Subsystem: Not Supported 01:26:23.968 Fixed Capacity Management: Not Supported 01:26:23.968 Variable Capacity Management: Not Supported 01:26:23.968 Delete Endurance Group: Not Supported 01:26:23.968 Delete NVM Set: Not Supported 01:26:23.968 Extended LBA Formats Supported: Not Supported 01:26:23.968 Flexible Data 
Placement Supported: Not Supported 01:26:23.968 01:26:23.968 Controller Memory Buffer Support 01:26:23.968 ================================ 01:26:23.968 Supported: No 01:26:23.968 01:26:23.968 Persistent Memory Region Support 01:26:23.968 ================================ 01:26:23.968 Supported: No 01:26:23.968 01:26:23.968 Admin Command Set Attributes 01:26:23.968 ============================ 01:26:23.968 Security Send/Receive: Not Supported 01:26:23.968 Format NVM: Not Supported 01:26:23.968 Firmware Activate/Download: Not Supported 01:26:23.968 Namespace Management: Not Supported 01:26:23.968 Device Self-Test: Not Supported 01:26:23.968 Directives: Not Supported 01:26:23.968 NVMe-MI: Not Supported 01:26:23.968 Virtualization Management: Not Supported 01:26:23.968 Doorbell Buffer Config: Not Supported 01:26:23.968 Get LBA Status Capability: Not Supported 01:26:23.968 Command & Feature Lockdown Capability: Not Supported 01:26:23.968 Abort Command Limit: 1 01:26:23.968 Async Event Request Limit: 1 01:26:23.968 Number of Firmware Slots: N/A 01:26:23.968 Firmware Slot 1 Read-Only: N/A 01:26:23.968 Firmware Activation Without Reset: N/A 01:26:23.968 Multiple Update Detection Support: N/A 01:26:23.968 Firmware Update Granularity: No Information Provided 01:26:23.968 Per-Namespace SMART Log: No 01:26:23.968 Asymmetric Namespace Access Log Page: Not Supported 01:26:23.968 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 01:26:23.968 Command Effects Log Page: Not Supported 01:26:23.968 Get Log Page Extended Data: Supported 01:26:23.968 Telemetry Log Pages: Not Supported 01:26:23.968 Persistent Event Log Pages: Not Supported 01:26:23.968 Supported Log Pages Log Page: May Support 01:26:23.968 Commands Supported & Effects Log Page: Not Supported 01:26:23.968 Feature Identifiers & Effects Log Page:May Support 01:26:23.968 NVMe-MI Commands & Effects Log Page: May Support 01:26:23.968 Data Area 4 for Telemetry Log: Not Supported 01:26:23.968 Error Log Page Entries Supported: 1 01:26:23.968 Keep Alive: Not Supported 01:26:23.968 01:26:23.968 NVM Command Set Attributes 01:26:23.968 ========================== 01:26:23.968 Submission Queue Entry Size 01:26:23.968 Max: 1 01:26:23.968 Min: 1 01:26:23.968 Completion Queue Entry Size 01:26:23.968 Max: 1 01:26:23.968 Min: 1 01:26:23.968 Number of Namespaces: 0 01:26:23.968 Compare Command: Not Supported 01:26:23.968 Write Uncorrectable Command: Not Supported 01:26:23.968 Dataset Management Command: Not Supported 01:26:23.968 Write Zeroes Command: Not Supported 01:26:23.968 Set Features Save Field: Not Supported 01:26:23.968 Reservations: Not Supported 01:26:23.968 Timestamp: Not Supported 01:26:23.968 Copy: Not Supported 01:26:23.968 Volatile Write Cache: Not Present 01:26:23.968 Atomic Write Unit (Normal): 1 01:26:23.968 Atomic Write Unit (PFail): 1 01:26:23.968 Atomic Compare & Write Unit: 1 01:26:23.968 Fused Compare & Write: Not Supported 01:26:23.968 Scatter-Gather List 01:26:23.968 SGL Command Set: Supported 01:26:23.968 SGL Keyed: Not Supported 01:26:23.968 SGL Bit Bucket Descriptor: Not Supported 01:26:23.968 SGL Metadata Pointer: Not Supported 01:26:23.968 Oversized SGL: Not Supported 01:26:23.968 SGL Metadata Address: Not Supported 01:26:23.968 SGL Offset: Supported 01:26:23.968 Transport SGL Data Block: Not Supported 01:26:23.968 Replay Protected Memory Block: Not Supported 01:26:23.968 01:26:23.968 Firmware Slot Information 01:26:23.968 ========================= 01:26:23.968 Active slot: 0 01:26:23.968 01:26:23.968 01:26:23.968 Error Log 
01:26:23.968 ========= 01:26:23.968 01:26:23.968 Active Namespaces 01:26:23.968 ================= 01:26:23.968 Discovery Log Page 01:26:23.968 ================== 01:26:23.968 Generation Counter: 2 01:26:23.968 Number of Records: 2 01:26:23.968 Record Format: 0 01:26:23.968 01:26:23.968 Discovery Log Entry 0 01:26:23.969 ---------------------- 01:26:23.969 Transport Type: 3 (TCP) 01:26:23.969 Address Family: 1 (IPv4) 01:26:23.969 Subsystem Type: 3 (Current Discovery Subsystem) 01:26:23.969 Entry Flags: 01:26:23.969 Duplicate Returned Information: 0 01:26:23.969 Explicit Persistent Connection Support for Discovery: 0 01:26:23.969 Transport Requirements: 01:26:23.969 Secure Channel: Not Specified 01:26:23.969 Port ID: 1 (0x0001) 01:26:23.969 Controller ID: 65535 (0xffff) 01:26:23.969 Admin Max SQ Size: 32 01:26:23.969 Transport Service Identifier: 4420 01:26:23.969 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 01:26:23.969 Transport Address: 10.0.0.1 01:26:23.969 Discovery Log Entry 1 01:26:23.969 ---------------------- 01:26:23.969 Transport Type: 3 (TCP) 01:26:23.969 Address Family: 1 (IPv4) 01:26:23.969 Subsystem Type: 2 (NVM Subsystem) 01:26:23.969 Entry Flags: 01:26:23.969 Duplicate Returned Information: 0 01:26:23.969 Explicit Persistent Connection Support for Discovery: 0 01:26:23.969 Transport Requirements: 01:26:23.969 Secure Channel: Not Specified 01:26:23.969 Port ID: 1 (0x0001) 01:26:23.969 Controller ID: 65535 (0xffff) 01:26:23.969 Admin Max SQ Size: 32 01:26:23.969 Transport Service Identifier: 4420 01:26:23.969 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 01:26:23.969 Transport Address: 10.0.0.1 01:26:23.969 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:26:24.229 get_feature(0x01) failed 01:26:24.229 get_feature(0x02) failed 01:26:24.229 get_feature(0x04) failed 01:26:24.229 ===================================================== 01:26:24.229 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:26:24.229 ===================================================== 01:26:24.229 Controller Capabilities/Features 01:26:24.229 ================================ 01:26:24.229 Vendor ID: 0000 01:26:24.229 Subsystem Vendor ID: 0000 01:26:24.229 Serial Number: ad0f495ce36107b2cc42 01:26:24.229 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 01:26:24.229 Firmware Version: 6.8.9-20 01:26:24.229 Recommended Arb Burst: 6 01:26:24.229 IEEE OUI Identifier: 00 00 00 01:26:24.229 Multi-path I/O 01:26:24.229 May have multiple subsystem ports: Yes 01:26:24.229 May have multiple controllers: Yes 01:26:24.229 Associated with SR-IOV VF: No 01:26:24.229 Max Data Transfer Size: Unlimited 01:26:24.229 Max Number of Namespaces: 1024 01:26:24.229 Max Number of I/O Queues: 128 01:26:24.229 NVMe Specification Version (VS): 1.3 01:26:24.229 NVMe Specification Version (Identify): 1.3 01:26:24.229 Maximum Queue Entries: 1024 01:26:24.229 Contiguous Queues Required: No 01:26:24.229 Arbitration Mechanisms Supported 01:26:24.229 Weighted Round Robin: Not Supported 01:26:24.229 Vendor Specific: Not Supported 01:26:24.229 Reset Timeout: 7500 ms 01:26:24.229 Doorbell Stride: 4 bytes 01:26:24.229 NVM Subsystem Reset: Not Supported 01:26:24.229 Command Sets Supported 01:26:24.229 NVM Command Set: Supported 01:26:24.229 Boot Partition: Not Supported 01:26:24.229 Memory 
Page Size Minimum: 4096 bytes 01:26:24.229 Memory Page Size Maximum: 4096 bytes 01:26:24.229 Persistent Memory Region: Not Supported 01:26:24.229 Optional Asynchronous Events Supported 01:26:24.229 Namespace Attribute Notices: Supported 01:26:24.229 Firmware Activation Notices: Not Supported 01:26:24.229 ANA Change Notices: Supported 01:26:24.229 PLE Aggregate Log Change Notices: Not Supported 01:26:24.229 LBA Status Info Alert Notices: Not Supported 01:26:24.229 EGE Aggregate Log Change Notices: Not Supported 01:26:24.229 Normal NVM Subsystem Shutdown event: Not Supported 01:26:24.229 Zone Descriptor Change Notices: Not Supported 01:26:24.229 Discovery Log Change Notices: Not Supported 01:26:24.229 Controller Attributes 01:26:24.229 128-bit Host Identifier: Supported 01:26:24.229 Non-Operational Permissive Mode: Not Supported 01:26:24.229 NVM Sets: Not Supported 01:26:24.229 Read Recovery Levels: Not Supported 01:26:24.229 Endurance Groups: Not Supported 01:26:24.229 Predictable Latency Mode: Not Supported 01:26:24.229 Traffic Based Keep ALive: Supported 01:26:24.229 Namespace Granularity: Not Supported 01:26:24.229 SQ Associations: Not Supported 01:26:24.229 UUID List: Not Supported 01:26:24.229 Multi-Domain Subsystem: Not Supported 01:26:24.229 Fixed Capacity Management: Not Supported 01:26:24.229 Variable Capacity Management: Not Supported 01:26:24.229 Delete Endurance Group: Not Supported 01:26:24.229 Delete NVM Set: Not Supported 01:26:24.229 Extended LBA Formats Supported: Not Supported 01:26:24.229 Flexible Data Placement Supported: Not Supported 01:26:24.229 01:26:24.229 Controller Memory Buffer Support 01:26:24.229 ================================ 01:26:24.229 Supported: No 01:26:24.229 01:26:24.229 Persistent Memory Region Support 01:26:24.229 ================================ 01:26:24.229 Supported: No 01:26:24.229 01:26:24.229 Admin Command Set Attributes 01:26:24.229 ============================ 01:26:24.229 Security Send/Receive: Not Supported 01:26:24.229 Format NVM: Not Supported 01:26:24.229 Firmware Activate/Download: Not Supported 01:26:24.229 Namespace Management: Not Supported 01:26:24.229 Device Self-Test: Not Supported 01:26:24.229 Directives: Not Supported 01:26:24.229 NVMe-MI: Not Supported 01:26:24.229 Virtualization Management: Not Supported 01:26:24.229 Doorbell Buffer Config: Not Supported 01:26:24.229 Get LBA Status Capability: Not Supported 01:26:24.229 Command & Feature Lockdown Capability: Not Supported 01:26:24.229 Abort Command Limit: 4 01:26:24.229 Async Event Request Limit: 4 01:26:24.229 Number of Firmware Slots: N/A 01:26:24.229 Firmware Slot 1 Read-Only: N/A 01:26:24.229 Firmware Activation Without Reset: N/A 01:26:24.229 Multiple Update Detection Support: N/A 01:26:24.229 Firmware Update Granularity: No Information Provided 01:26:24.229 Per-Namespace SMART Log: Yes 01:26:24.229 Asymmetric Namespace Access Log Page: Supported 01:26:24.229 ANA Transition Time : 10 sec 01:26:24.229 01:26:24.229 Asymmetric Namespace Access Capabilities 01:26:24.229 ANA Optimized State : Supported 01:26:24.229 ANA Non-Optimized State : Supported 01:26:24.229 ANA Inaccessible State : Supported 01:26:24.229 ANA Persistent Loss State : Supported 01:26:24.229 ANA Change State : Supported 01:26:24.229 ANAGRPID is not changed : No 01:26:24.229 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 01:26:24.229 01:26:24.229 ANA Group Identifier Maximum : 128 01:26:24.229 Number of ANA Group Identifiers : 128 01:26:24.229 Max Number of Allowed Namespaces : 1024 01:26:24.229 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 01:26:24.229 Command Effects Log Page: Supported 01:26:24.229 Get Log Page Extended Data: Supported 01:26:24.229 Telemetry Log Pages: Not Supported 01:26:24.229 Persistent Event Log Pages: Not Supported 01:26:24.229 Supported Log Pages Log Page: May Support 01:26:24.229 Commands Supported & Effects Log Page: Not Supported 01:26:24.229 Feature Identifiers & Effects Log Page:May Support 01:26:24.229 NVMe-MI Commands & Effects Log Page: May Support 01:26:24.229 Data Area 4 for Telemetry Log: Not Supported 01:26:24.229 Error Log Page Entries Supported: 128 01:26:24.229 Keep Alive: Supported 01:26:24.229 Keep Alive Granularity: 1000 ms 01:26:24.229 01:26:24.229 NVM Command Set Attributes 01:26:24.229 ========================== 01:26:24.229 Submission Queue Entry Size 01:26:24.230 Max: 64 01:26:24.230 Min: 64 01:26:24.230 Completion Queue Entry Size 01:26:24.230 Max: 16 01:26:24.230 Min: 16 01:26:24.230 Number of Namespaces: 1024 01:26:24.230 Compare Command: Not Supported 01:26:24.230 Write Uncorrectable Command: Not Supported 01:26:24.230 Dataset Management Command: Supported 01:26:24.230 Write Zeroes Command: Supported 01:26:24.230 Set Features Save Field: Not Supported 01:26:24.230 Reservations: Not Supported 01:26:24.230 Timestamp: Not Supported 01:26:24.230 Copy: Not Supported 01:26:24.230 Volatile Write Cache: Present 01:26:24.230 Atomic Write Unit (Normal): 1 01:26:24.230 Atomic Write Unit (PFail): 1 01:26:24.230 Atomic Compare & Write Unit: 1 01:26:24.230 Fused Compare & Write: Not Supported 01:26:24.230 Scatter-Gather List 01:26:24.230 SGL Command Set: Supported 01:26:24.230 SGL Keyed: Not Supported 01:26:24.230 SGL Bit Bucket Descriptor: Not Supported 01:26:24.230 SGL Metadata Pointer: Not Supported 01:26:24.230 Oversized SGL: Not Supported 01:26:24.230 SGL Metadata Address: Not Supported 01:26:24.230 SGL Offset: Supported 01:26:24.230 Transport SGL Data Block: Not Supported 01:26:24.230 Replay Protected Memory Block: Not Supported 01:26:24.230 01:26:24.230 Firmware Slot Information 01:26:24.230 ========================= 01:26:24.230 Active slot: 0 01:26:24.230 01:26:24.230 Asymmetric Namespace Access 01:26:24.230 =========================== 01:26:24.230 Change Count : 0 01:26:24.230 Number of ANA Group Descriptors : 1 01:26:24.230 ANA Group Descriptor : 0 01:26:24.230 ANA Group ID : 1 01:26:24.230 Number of NSID Values : 1 01:26:24.230 Change Count : 0 01:26:24.230 ANA State : 1 01:26:24.230 Namespace Identifier : 1 01:26:24.230 01:26:24.230 Commands Supported and Effects 01:26:24.230 ============================== 01:26:24.230 Admin Commands 01:26:24.230 -------------- 01:26:24.230 Get Log Page (02h): Supported 01:26:24.230 Identify (06h): Supported 01:26:24.230 Abort (08h): Supported 01:26:24.230 Set Features (09h): Supported 01:26:24.230 Get Features (0Ah): Supported 01:26:24.230 Asynchronous Event Request (0Ch): Supported 01:26:24.230 Keep Alive (18h): Supported 01:26:24.230 I/O Commands 01:26:24.230 ------------ 01:26:24.230 Flush (00h): Supported 01:26:24.230 Write (01h): Supported LBA-Change 01:26:24.230 Read (02h): Supported 01:26:24.230 Write Zeroes (08h): Supported LBA-Change 01:26:24.230 Dataset Management (09h): Supported 01:26:24.230 01:26:24.230 Error Log 01:26:24.230 ========= 01:26:24.230 Entry: 0 01:26:24.230 Error Count: 0x3 01:26:24.230 Submission Queue Id: 0x0 01:26:24.230 Command Id: 0x5 01:26:24.230 Phase Bit: 0 01:26:24.230 Status Code: 0x2 01:26:24.230 Status Code Type: 0x0 01:26:24.230 Do Not Retry: 1 01:26:24.230 Error 
Location: 0x28 01:26:24.230 LBA: 0x0 01:26:24.230 Namespace: 0x0 01:26:24.230 Vendor Log Page: 0x0 01:26:24.230 ----------- 01:26:24.230 Entry: 1 01:26:24.230 Error Count: 0x2 01:26:24.230 Submission Queue Id: 0x0 01:26:24.230 Command Id: 0x5 01:26:24.230 Phase Bit: 0 01:26:24.230 Status Code: 0x2 01:26:24.230 Status Code Type: 0x0 01:26:24.230 Do Not Retry: 1 01:26:24.230 Error Location: 0x28 01:26:24.230 LBA: 0x0 01:26:24.230 Namespace: 0x0 01:26:24.230 Vendor Log Page: 0x0 01:26:24.230 ----------- 01:26:24.230 Entry: 2 01:26:24.230 Error Count: 0x1 01:26:24.230 Submission Queue Id: 0x0 01:26:24.230 Command Id: 0x4 01:26:24.230 Phase Bit: 0 01:26:24.230 Status Code: 0x2 01:26:24.230 Status Code Type: 0x0 01:26:24.230 Do Not Retry: 1 01:26:24.230 Error Location: 0x28 01:26:24.230 LBA: 0x0 01:26:24.230 Namespace: 0x0 01:26:24.230 Vendor Log Page: 0x0 01:26:24.230 01:26:24.230 Number of Queues 01:26:24.230 ================ 01:26:24.230 Number of I/O Submission Queues: 128 01:26:24.230 Number of I/O Completion Queues: 128 01:26:24.230 01:26:24.230 ZNS Specific Controller Data 01:26:24.230 ============================ 01:26:24.230 Zone Append Size Limit: 0 01:26:24.230 01:26:24.230 01:26:24.230 Active Namespaces 01:26:24.230 ================= 01:26:24.230 get_feature(0x05) failed 01:26:24.230 Namespace ID:1 01:26:24.230 Command Set Identifier: NVM (00h) 01:26:24.230 Deallocate: Supported 01:26:24.230 Deallocated/Unwritten Error: Not Supported 01:26:24.230 Deallocated Read Value: Unknown 01:26:24.230 Deallocate in Write Zeroes: Not Supported 01:26:24.230 Deallocated Guard Field: 0xFFFF 01:26:24.230 Flush: Supported 01:26:24.230 Reservation: Not Supported 01:26:24.230 Namespace Sharing Capabilities: Multiple Controllers 01:26:24.230 Size (in LBAs): 1310720 (5GiB) 01:26:24.230 Capacity (in LBAs): 1310720 (5GiB) 01:26:24.230 Utilization (in LBAs): 1310720 (5GiB) 01:26:24.230 UUID: f97059e3-a02e-4842-bc4c-b28c4d45198a 01:26:24.230 Thin Provisioning: Not Supported 01:26:24.230 Per-NS Atomic Units: Yes 01:26:24.230 Atomic Boundary Size (Normal): 0 01:26:24.230 Atomic Boundary Size (PFail): 0 01:26:24.230 Atomic Boundary Offset: 0 01:26:24.230 NGUID/EUI64 Never Reused: No 01:26:24.230 ANA group ID: 1 01:26:24.230 Namespace Write Protected: No 01:26:24.230 Number of LBA Formats: 1 01:26:24.230 Current LBA Format: LBA Format #00 01:26:24.230 LBA Format #00: Data Size: 4096 Metadata Size: 0 01:26:24.230 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:26:24.230 rmmod nvme_tcp 01:26:24.230 rmmod nvme_fabrics 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 01:26:24.230 05:21:06 
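Note: the teardown that follows (nvmftestfini -> nvmf_tcp_fini -> nvmf_veth_fini) mirrors the setup: unload nvme-tcp/nvme-fabrics, strip the SPDK_NVMF-tagged iptables rules, then dismantle the bridge, veth pairs and namespace. Condensed from the traced commands:
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip link set nvmf_init_br nomaster
ip link set nvmf_tgt_br nomaster
ip link set nvmf_init_br down
ip link set nvmf_tgt_br down
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
# _remove_spdk_ns (traced with xtrace suppressed) then deletes the nvmf_tgt_ns_spdk namespace itself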
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:26:24.230 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:26:24.489 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:26:24.489 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:26:24.490 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 01:26:24.749 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:26:24.749 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:26:24.749 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:26:24.749 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:26:24.749 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 01:26:24.749 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 01:26:24.749 05:21:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:26:25.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:26:25.577 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:26:25.577 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:26:25.577 ************************************ 01:26:25.577 END TEST nvmf_identify_kernel_target 01:26:25.577 ************************************ 01:26:25.577 01:26:25.577 real 0m3.772s 01:26:25.577 user 0m1.357s 01:26:25.577 sys 0m1.865s 01:26:25.577 05:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 01:26:25.577 05:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:26:25.835 ************************************ 01:26:25.835 START TEST nvmf_auth_host 01:26:25.835 ************************************ 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 01:26:25.835 * Looking for test storage... 
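Note: the kernel NVMe-oF target used by this test is driven entirely through nvmet configfs: configure_kernel_target (common.sh@686-@705 earlier) creates the subsystem, namespace and port, and clean_kernel_target (common.sh@714-@723 just above) reverses it. The redirection targets of the traced echo commands are not visible in the xtrace, so the attribute file names below are the standard nvmet configfs paths and should be read as an assumption:
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"          # assumed target of the traced echo
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
# teardown (clean_kernel_target), roughly in reverse order:
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1"
rmdir "$nvmet/ports/1"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet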
01:26:25.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 01:26:25.835 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 01:26:25.836 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 01:26:25.836 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:26:25.836 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 01:26:25.836 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 01:26:25.836 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:26:25.836 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:26:25.836 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 01:26:25.836 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:26:25.836 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:26:25.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:25.836 --rc genhtml_branch_coverage=1 01:26:25.836 --rc genhtml_function_coverage=1 01:26:25.836 --rc genhtml_legend=1 01:26:25.836 --rc geninfo_all_blocks=1 01:26:25.836 --rc geninfo_unexecuted_blocks=1 01:26:25.836 01:26:25.836 ' 01:26:25.836 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:26:25.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:25.836 --rc genhtml_branch_coverage=1 01:26:25.836 --rc genhtml_function_coverage=1 01:26:25.836 --rc genhtml_legend=1 01:26:25.836 --rc geninfo_all_blocks=1 01:26:25.836 --rc geninfo_unexecuted_blocks=1 01:26:25.836 01:26:25.836 ' 01:26:25.836 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:26:25.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:25.836 --rc genhtml_branch_coverage=1 01:26:25.836 --rc genhtml_function_coverage=1 01:26:25.836 --rc genhtml_legend=1 01:26:25.836 --rc geninfo_all_blocks=1 01:26:25.836 --rc geninfo_unexecuted_blocks=1 01:26:25.836 01:26:25.836 ' 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:26:26.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:26:26.096 --rc genhtml_branch_coverage=1 01:26:26.096 --rc genhtml_function_coverage=1 01:26:26.096 --rc genhtml_legend=1 01:26:26.096 --rc geninfo_all_blocks=1 01:26:26.096 --rc geninfo_unexecuted_blocks=1 01:26:26.096 01:26:26.096 ' 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
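Note: the lt/cmp_versions trace above (scripts/common.sh@333-@368) decides whether the installed lcov (1.15 here) is older than 2 by splitting both version strings on '.', '-' and ':' and comparing them element by element; since 1.15 < 2, the lcov 1.x option names are exported in LCOV_OPTS. A self-contained sketch of the same comparison with simplified names:
version_lt() {
  # return 0 (true) when $1 is strictly older than $2
  local IFS=.-:
  local -a ver1=($1) ver2=($2)
  local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for ((i = 0; i < len; i++)); do
    (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
  done
  return 1
}
version_lt 1.15 2 && echo "lcov < 2: using the lcov 1.x --rc option names"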
nvmf/common.sh@9 -- # NVMF_PORT=4420 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:26:26.096 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:26:26.096 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:26:26.097 Cannot find device "nvmf_init_br" 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:26:26.097 Cannot find device "nvmf_init_br2" 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:26:26.097 Cannot find device "nvmf_tgt_br" 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:26:26.097 Cannot find device "nvmf_tgt_br2" 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:26:26.097 Cannot find device "nvmf_init_br" 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:26:26.097 Cannot find device "nvmf_init_br2" 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:26:26.097 Cannot find device "nvmf_tgt_br" 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:26:26.097 Cannot find device "nvmf_tgt_br2" 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:26:26.097 Cannot find device "nvmf_br" 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:26:26.097 Cannot find device "nvmf_init_if" 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:26:26.097 Cannot find device "nvmf_init_if2" 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:26:26.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:26.097 05:21:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:26:26.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:26:26.097 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:26:26.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:26:26.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 01:26:26.358 01:26:26.358 --- 10.0.0.3 ping statistics --- 01:26:26.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:26.358 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:26:26.358 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:26:26.358 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 01:26:26.358 01:26:26.358 --- 10.0.0.4 ping statistics --- 01:26:26.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:26.358 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:26:26.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:26:26.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 01:26:26.358 01:26:26.358 --- 10.0.0.1 ping statistics --- 01:26:26.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:26.358 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:26:26.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:26:26.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 01:26:26.358 01:26:26.358 --- 10.0.0.2 ping statistics --- 01:26:26.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:26:26.358 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 01:26:26.358 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:26.618 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78202 01:26:26.618 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 01:26:26.618 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78202 01:26:26.618 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78202 ']' 01:26:26.618 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:26.618 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:26.618 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
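The nvmf_veth_init block traced above builds a small two-namespace test fabric: the target interfaces live in the nvmf_tgt_ns_spdk namespace, their veth peers stay in the root namespace and are enslaved to a bridge together with the initiator peers, and iptables opens TCP/4420 before the cross-namespace pings verify reachability. Condensed from the trace (same interface names and 10.0.0.x addresses; a minimal sketch of one interface pair rather than the full helper, the *_if2/*_br2 pair is set up the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair (root ns)
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end moves into the ns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                             # bridge the two *_br ends together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP listener port
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                          # initiator -> target sanity check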
01:26:26.618 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:26.618 05:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d898d548c5523d3da4c16c640f0bd858 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.CKZ 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d898d548c5523d3da4c16c640f0bd858 0 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d898d548c5523d3da4c16c640f0bd858 0 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d898d548c5523d3da4c16c640f0bd858 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.CKZ 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.CKZ 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.CKZ 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:26:27.556 05:21:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=096eda0f90d82f513838d797b433b5c7148c254c05c17aa6e46098d1b2335106 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.5iD 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 096eda0f90d82f513838d797b433b5c7148c254c05c17aa6e46098d1b2335106 3 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 096eda0f90d82f513838d797b433b5c7148c254c05c17aa6e46098d1b2335106 3 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=096eda0f90d82f513838d797b433b5c7148c254c05c17aa6e46098d1b2335106 01:26:27.556 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.5iD 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.5iD 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.5iD 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dcfd06a9b721afb0b19b72704234d0b0499e3c11be495820 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.OmX 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dcfd06a9b721afb0b19b72704234d0b0499e3c11be495820 0 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dcfd06a9b721afb0b19b72704234d0b0499e3c11be495820 0 
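The gen_dhchap_key calls traced here draw len/2 random bytes with xxd, stash them in a spdk.key-<digest>.XXX temp file, and wrap them into a DHHC-1 secret via an inline python step whose redirect target is not visible in the xtrace. Judging from the DHHC-1:NN:...==: strings printed later in this log, the wrapping appears to be base64 over the ASCII hex string plus a little-endian CRC-32 trailer, with the digest id as a two-digit prefix; a standalone sketch under that assumption (variable names illustrative):

    digest=0    # null=0, sha256=1, sha384=2, sha512=3
    len=32
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # hex string, $len characters
    file=$(mktemp -t spdk.key-null.XXX)
    # Assumed encoding, inferred from the keys shown below (e.g. DHHC-1:00:...==:):
    # base64( ascii(key) || crc32(ascii(key)) little-endian ), prefixed with the hash id.
    python3 - "$key" "$digest" <<'PY' > "$file"
    import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")
    print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    PY
    chmod 0600 "$file"
    echo "$file"    # e.g. /tmp/spdk.key-null.CKZ as seen above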
01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dcfd06a9b721afb0b19b72704234d0b0499e3c11be495820 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 01:26:27.557 05:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:26:27.557 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.OmX 01:26:27.557 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.OmX 01:26:27.557 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.OmX 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ce2220827b3fde97e93b4a41063aedfd7dd15b30fd435507 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.i5n 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ce2220827b3fde97e93b4a41063aedfd7dd15b30fd435507 2 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ce2220827b3fde97e93b4a41063aedfd7dd15b30fd435507 2 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ce2220827b3fde97e93b4a41063aedfd7dd15b30fd435507 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.i5n 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.i5n 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.i5n 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:26:27.815 05:21:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a8e0242aa003f77db96284f193a37f2b 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.unW 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a8e0242aa003f77db96284f193a37f2b 1 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a8e0242aa003f77db96284f193a37f2b 1 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a8e0242aa003f77db96284f193a37f2b 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.unW 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.unW 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.unW 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8a884bd80ceef3841a016e62393162c6 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.TGN 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8a884bd80ceef3841a016e62393162c6 1 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8a884bd80ceef3841a016e62393162c6 1 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=8a884bd80ceef3841a016e62393162c6 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.TGN 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.TGN 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.TGN 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:26:27.815 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f60b8a7973ec3d9ed4a4e1f839a7e2e3d1f32abb29a7907a 01:26:27.816 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 01:26:27.816 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6Cm 01:26:27.816 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f60b8a7973ec3d9ed4a4e1f839a7e2e3d1f32abb29a7907a 2 01:26:27.816 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f60b8a7973ec3d9ed4a4e1f839a7e2e3d1f32abb29a7907a 2 01:26:27.816 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:26:27.816 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:26:27.816 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f60b8a7973ec3d9ed4a4e1f839a7e2e3d1f32abb29a7907a 01:26:27.816 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 01:26:27.816 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6Cm 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6Cm 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.6Cm 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:26:28.075 05:21:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a2341787c04d3360bd6a5a407e30c3bf 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zjn 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a2341787c04d3360bd6a5a407e30c3bf 0 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a2341787c04d3360bd6a5a407e30c3bf 0 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a2341787c04d3360bd6a5a407e30c3bf 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zjn 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zjn 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.zjn 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=beb02aa18aef23dc8328b8cf42a737dc7a9c975385d7937217d05107c7f08c42 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.1cs 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key beb02aa18aef23dc8328b8cf42a737dc7a9c975385d7937217d05107c7f08c42 3 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 beb02aa18aef23dc8328b8cf42a737dc7a9c975385d7937217d05107c7f08c42 3 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=beb02aa18aef23dc8328b8cf42a737dc7a9c975385d7937217d05107c7f08c42 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1cs 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1cs 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.1cs 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78202 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78202 ']' 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 01:26:28.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 01:26:28.075 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:28.334 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:26:28.334 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 01:26:28.334 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:26:28.334 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CKZ 01:26:28.334 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:28.334 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:28.334 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.5iD ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5iD 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.OmX 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.i5n ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.i5n 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.unW 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.TGN ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TGN 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.6Cm 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.zjn ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.zjn 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.1cs 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:28.335 05:21:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 01:26:28.335 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 01:26:28.593 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:26:28.593 05:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:26:28.851 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:26:28.851 Waiting for block devices as requested 01:26:29.110 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:26:29.110 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:26:30.046 No valid GPT data, bailing 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:26:30.046 No valid GPT data, bailing 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:26:30.046 No valid GPT data, bailing 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 01:26:30.046 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:26:30.047 No valid GPT data, bailing 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 01:26:30.047 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid=0567fff2-ddaf-4a4f-877d-a2600d7e662b -a 10.0.0.1 -t tcp -s 4420 01:26:30.306 01:26:30.306 Discovery Log Number of Records 2, Generation counter 2 01:26:30.306 =====Discovery Log Entry 0====== 01:26:30.306 trtype: tcp 01:26:30.306 adrfam: ipv4 01:26:30.306 subtype: current discovery subsystem 01:26:30.306 treq: not specified, sq flow control disable supported 01:26:30.306 portid: 1 01:26:30.306 trsvcid: 4420 01:26:30.306 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:26:30.306 traddr: 10.0.0.1 01:26:30.306 eflags: none 01:26:30.306 sectype: none 01:26:30.306 =====Discovery Log Entry 1====== 01:26:30.306 trtype: tcp 01:26:30.306 adrfam: ipv4 01:26:30.306 subtype: nvme subsystem 01:26:30.306 treq: not specified, sq flow control disable supported 01:26:30.306 portid: 1 01:26:30.306 trsvcid: 4420 01:26:30.306 subnqn: nqn.2024-02.io.spdk:cnode0 01:26:30.306 traddr: 10.0.0.1 01:26:30.306 eflags: none 01:26:30.306 sectype: none 01:26:30.306 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:26:30.306 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 01:26:30.306 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:26:30.306 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:26:30.306 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:30.306 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:30.306 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:30.306 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:30.306 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:30.306 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:30.306 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:30.306 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:30.306 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:30.307 nvme0n1 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:30.307 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: ]] 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:30.567 nvme0n1 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:30.567 
05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:30.567 05:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:30.567 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:30.567 05:21:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:30.567 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:30.567 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:30.567 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:30.567 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:30.567 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:30.567 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:30.567 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:30.568 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:30.568 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:26:30.568 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:30.568 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:30.826 nvme0n1 01:26:30.826 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:30.826 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:30.826 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:30.826 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:30.826 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:30.826 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:30.826 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:30.826 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:30.826 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:30.826 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:30.826 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:30.826 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:30.826 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:26:30.826 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:30.826 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:30.826 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:30.827 05:21:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:30.827 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.100 nvme0n1 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: ]] 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.100 05:21:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.100 nvme0n1 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:31.100 
05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 01:26:31.100 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.101 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
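Each iteration of the loop traced above exercises one (digest, dhgroup, keyid) combination: the expected DH-HMAC-CHAP secret is programmed into the kernel nvmet host entry, SPDK is restricted to the digest/dhgroup under test, and the controller is attached and then detached again. The following is a condensed sketch of a single sha256/ffdhe2048 pass, reconstructed from the commands visible in the trace; the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and the key0/ckey0 keyring entries are assumptions, since the redirection targets and the earlier key registration are not shown in this excerpt.

    # Target side: program the secrets the allowed host must present (assumed attribute paths)
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)'     > "$host_dir/dhchap_hash"
    echo ffdhe2048          > "$host_dir/dhchap_dhgroup"
    echo 'DHHC-1:00:<key>'  > "$host_dir/dhchap_key"       # host secret for this keyid
    echo 'DHHC-1:03:<ckey>' > "$host_dir/dhchap_ctrl_key"  # controller secret, only when a ckey exists

    # Initiator side: limit SPDK to the combination under test, then connect with the matching keys
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the controller came up, then tear it down before the next combination
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0

The <key>/<ckey> placeholders stand in for the full DHHC-1 secrets, which appear verbatim in the trace.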
01:26:31.359 nvme0n1 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:31.359 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:26:31.617 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:31.617 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: ]] 01:26:31.617 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:31.617 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 01:26:31.617 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:26:31.618 05:21:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.618 05:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.618 nvme0n1 01:26:31.618 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.618 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:31.618 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:31.618 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.618 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.618 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:31.877 05:21:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:31.877 05:21:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.877 nvme0n1 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:31.877 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:31.878 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:32.148 nvme0n1 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: ]] 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.148 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:32.430 nvme0n1 01:26:32.430 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.430 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:32.430 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:32.430 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.430 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:32.430 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.430 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:32.430 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:32.430 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.430 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:32.430 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.430 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:32.430 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 01:26:32.430 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:32.430 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:32.430 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:32.431 nvme0n1 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:32.431 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.697 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:32.697 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:32.697 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.697 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:32.697 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.697 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:26:32.697 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:32.697 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 01:26:32.697 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:32.697 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:32.697 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:26:32.697 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:26:32.697 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:32.697 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:32.697 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:32.697 05:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:26:32.966 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:32.966 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: ]] 01:26:32.966 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:32.966 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 01:26:32.966 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:32.966 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:32.966 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:26:32.966 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:26:32.966 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:32.966 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:26:32.966 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.966 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:32.966 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:32.966 05:21:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:32.967 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:32.967 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:32.967 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:32.967 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:32.967 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:32.967 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:32.967 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:32.967 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:32.967 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:32.967 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:32.967 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:26:32.967 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:32.967 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:33.226 nvme0n1 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:26:33.226 05:21:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.226 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:33.486 nvme0n1 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:33.486 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:33.487 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:26:33.487 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.487 05:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:33.746 nvme0n1 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: ]] 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:33.746 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:33.747 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:26:33.747 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:33.747 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:34.005 nvme0n1 01:26:34.005 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:34.006 05:21:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:34.006 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:34.264 nvme0n1 01:26:34.264 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:34.264 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:34.264 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:34.264 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:34.265 05:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: ]] 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:35.644 05:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:35.904 nvme0n1 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:35.904 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:36.162 nvme0n1 01:26:36.163 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:36.163 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:36.163 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:36.163 05:21:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:36.163 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:36.163 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:36.421 05:21:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:36.421 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:36.681 nvme0n1 01:26:36.681 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:36.681 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:36.681 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:36.681 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:36.681 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:36.681 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:36.681 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:36.681 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:36.681 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:36.681 05:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: ]] 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:26:36.681 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:36.681 
05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:36.940 nvme0n1 01:26:36.940 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:36.940 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:36.940 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:36.940 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:36.940 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:36.940 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:36.940 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:36.940 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:36.940 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:36.940 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:37.199 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:37.459 nvme0n1 01:26:37.459 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:37.459 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:37.459 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:37.459 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:37.459 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:37.459 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:37.459 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:37.459 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:37.459 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:37.460 05:21:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: ]] 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:37.460 05:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:38.028 nvme0n1 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.028 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:38.596 nvme0n1 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.596 05:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:38.596 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:38.596 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:38.596 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:38.596 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:38.596 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:38.596 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:38.596 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:38.596 
05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:38.596 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:38.596 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:38.596 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:38.596 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:38.596 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:26:38.596 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:38.596 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:39.165 nvme0n1 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: ]] 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:39.165 05:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:39.786 nvme0n1 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:39.786 05:21:22 
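Every secret in this sweep uses the DH-HMAC-CHAP representation DHHC-1:<hh>:<base64>:, where <hh> identifies the hash associated with the secret (00 means the secret is used as-is; 01, 02 and 03 correspond to SHA-256, SHA-384 and SHA-512) and the base64 payload carries the secret followed by a CRC-32; secrets of this shape are typically produced with nvme-cli's gen-dhchap-key. Key indices 0 through 3 also carry a controller key, so those rounds authenticate in both directions via --dhchap-ctrlr-key, while key4 has an empty ckey ([[ -z '' ]] above) and only the host proves its identity. A small sketch, under the format assumption just stated, that takes one of the keys from this log apart:

    # assumes the DHHC-1 layout described above: base64(secret || crc32)
    key='DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens:'
    payload=$(printf '%s' "$key" | cut -d: -f3)    # drop the DHHC-1 and hash-id fields
    printf '%s' "$payload" | base64 -d | wc -c     # 36 bytes: 32-byte secret + 4-byte CRC-32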
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:39.786 05:21:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:39.786 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.356 nvme0n1 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: ]] 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.356 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:26:40.669 nvme0n1 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.669 05:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.669 nvme0n1 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 01:26:40.669 
05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:26:40.669 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:26:40.670 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:40.670 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:26:40.670 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.670 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.943 nvme0n1 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:40.943 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: ]] 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:40.944 
05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.944 nvme0n1 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:40.944 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.203 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:41.203 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:41.203 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.203 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:26:41.203 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.203 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:41.203 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 01:26:41.203 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:41.203 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.204 nvme0n1 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: ]] 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.204 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.463 nvme0n1 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.463 
05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:41.463 05:21:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.463 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.723 nvme0n1 01:26:41.723 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.723 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:41.723 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:41.723 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.723 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.723 05:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:41.723 05:21:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.723 nvme0n1 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.723 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: ]] 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.994 05:21:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.994 nvme0n1 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:41.994 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:26:41.994 
05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:41.995 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
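The block below condenses one iteration of the loop traced above into a standalone sketch, so the repeated xtrace output is easier to follow. It is a reconstruction, not a copy of test/nvmf/host/auth.sh: the kernel-target configfs paths used for nvmet_auth_set_key are an assumption (the trace shows the echo commands but not their redirection targets), scripts/rpc.py stands in for the test's rpc_cmd wrapper, and the key names key0/ckey0 are assumed to have been registered with the SPDK keyring earlier in the run, which this excerpt does not show.

#!/usr/bin/env bash
# Sketch of a single sha384/ffdhe3072/keyid=0 round of nvmet_auth_set_key +
# connect_authenticate, reconstructed from the xtrace above. Requires a
# configured kernel nvmet target and a running SPDK application; lines marked
# "assumed" are not visible in this excerpt of the log.
set -e

rpc=scripts/rpc.py                                  # the log's rpc_cmd wraps this
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0
key='DHHC-1:00:...:'                                # substitute the full key0 value from the trace
ckey='DHHC-1:03:...:'                               # substitute the full ckey0 value from the trace

# Target side (kernel nvmet): what "nvmet_auth_set_key sha384 ffdhe3072 0" appears to do.
# The configfs attribute names below are assumed from the echoed values.
host_cfs=/sys/kernel/config/nvmet/hosts/$hostnqn    # assumed path
echo 'hmac(sha384)' > "$host_cfs/dhchap_hash"       # assumed attribute
echo 'ffdhe3072'    > "$host_cfs/dhchap_dhgroup"    # assumed attribute
echo "$key"         > "$host_cfs/dhchap_key"        # assumed attribute
[[ -n $ckey ]] && echo "$ckey" > "$host_cfs/dhchap_ctrl_key"   # assumed attribute

# Host side (SPDK initiator): the exact RPCs shown in the trace.
"$rpc" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0      # keyring names set up earlier in the run

# Verify the controller authenticated and came up, then tear it down for the next keyid.
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0

The DHHC-1:<nn>:<base64>: strings cycled through the trace are DH-HMAC-CHAP secret representations; the two-digit field after DHHC-1 identifies the hash used to transform the secret (00 for an untransformed secret), which is why the test's key0 through key4 carry different prefixes and lengths.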
01:26:42.255 nvme0n1 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: ]] 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:26:42.255 05:21:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.255 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:42.516 nvme0n1 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:42.516 05:21:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:42.516 05:21:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.516 05:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:42.776 nvme0n1 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:42.776 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:43.035 nvme0n1 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: ]] 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.035 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:43.294 nvme0n1 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.294 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:43.554 nvme0n1 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: ]] 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:43.554 05:21:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:43.554 05:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:44.122 nvme0n1 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:26:44.122 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:26:44.123 05:21:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:44.123 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:44.382 nvme0n1 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:44.382 05:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:44.949 nvme0n1 01:26:44.949 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:44.949 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:44.949 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:44.949 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:44.949 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:44.949 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:44.949 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:44.949 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:44.949 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:44.949 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:44.949 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: ]] 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:44.950 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:45.209 nvme0n1 01:26:45.209 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:45.209 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:45.209 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:45.209 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:45.209 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:45.209 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:45.209 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:45.210 05:21:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:45.210 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:45.778 nvme0n1 01:26:45.779 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:45.779 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:45.779 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:45.779 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:45.779 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:45.779 05:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: ]] 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:45.779 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:46.346 nvme0n1 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:46.346 05:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:46.914 nvme0n1 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:46.914 05:21:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:46.914 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:47.173 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:47.173 05:21:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:47.173 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:47.173 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:47.173 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:47.173 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:47.173 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:47.173 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:47.173 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:47.173 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:47.173 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:47.173 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:47.173 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:26:47.173 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:47.173 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:47.742 nvme0n1 01:26:47.742 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:47.742 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:47.742 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:47.742 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:47.742 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:47.742 05:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: ]] 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:26:47.742 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:47.742 
05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:48.309 nvme0n1 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.309 05:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:48.877 nvme0n1 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 01:26:48.877 05:21:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: ]] 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:48.877 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:49.137 05:21:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.137 nvme0n1 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:49.137 05:21:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.137 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.396 nvme0n1 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:49.396 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.397 nvme0n1 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.397 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: ]] 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.667 nvme0n1 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.667 05:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.667 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.943 nvme0n1 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: ]] 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:26:49.943 nvme0n1 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:49.943 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.203 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.204 nvme0n1 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.204 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 01:26:50.463 
05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.463 nvme0n1 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:50.463 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: ]] 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:50.464 
05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.464 05:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.723 nvme0n1 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.723 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.985 nvme0n1 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: ]] 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.985 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:50.986 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:50.986 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:50.986 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:50.986 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:50.986 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:50.986 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:50.986 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:50.986 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:50.986 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:50.986 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:50.986 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:50.986 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:50.986 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:26:50.986 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:50.986 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:51.246 nvme0n1 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:51.246 
05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:51.246 05:21:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:51.246 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:51.521 nvme0n1 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:51.521 05:21:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:51.521 05:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:51.781 nvme0n1 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: ]] 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:51.781 05:21:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:51.781 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:52.041 nvme0n1 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:26:52.041 
05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:52.041 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
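[editor's note] Each iteration of the loop above follows the same shape: host/auth.sh first programs the key under test on the target side (nvmet_auth_set_key echoes the hash name, dhgroup and DHHC-1 secret, presumably into the kernel nvmet configfs attributes for the host entry), then restricts the initiator to the matching digest/dhgroup with bdev_nvme_set_options, attaches with --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), checks that the controller appears in bdev_nvme_get_controllers, and detaches. A minimal sketch of one such cycle (sha512 / ffdhe4096 / keyid=3), mirroring the rpc_cmd calls visible in the trace; the key names key3/ckey3 are assumed to have been registered earlier in the test setup, and rpc_cmd is the test wrapper around scripts/rpc.py:

# one connect_authenticate cycle, as exercised by host/auth.sh above
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
rpc_cmd bdev_nvme_detach_controller nvme0

For keyid=4 the controller key is empty, so the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion at host/auth.sh@58 drops the flag and the session authenticates in one direction only, which is exactly what the key4 attach in the trace above shows.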
01:26:52.301 nvme0n1 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: ]] 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:26:52.301 05:21:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:52.301 05:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:52.869 nvme0n1 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:52.869 05:21:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:52.869 05:21:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:52.869 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:53.128 nvme0n1 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:53.128 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:53.696 nvme0n1 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: ]] 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:53.696 05:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:53.955 nvme0n1 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:53.955 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:54.521 nvme0n1 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDg5OGQ1NDhjNTUyM2QzZGE0YzE2YzY0MGYwYmQ4NTiu9ens: 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: ]] 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDk2ZWRhMGY5MGQ4MmY1MTM4MzhkNzk3YjQzM2I1YzcxNDhjMjU0YzA1YzE3YWE2ZTQ2MDk4ZDFiMjMzNTEwNmjUH6E=: 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:54.521 05:21:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:54.521 05:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:55.087 nvme0n1 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:26:55.087 05:21:37 
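[editor's note] The get_main_ns_ip fragment repeated before every attach (nvmf/common.sh@769-783) only resolves which address the initiator should dial for the transport under test; with the tcp transport it ends up echoing NVMF_INITIATOR_IP, here 10.0.0.1. A rough reconstruction of the flow the trace suggests, not the verbatim upstream source; the TEST_TRANSPORT variable name is an assumption, since the trace only shows the already-expanded value "tcp":

get_main_ns_ip() {
    local ip
    # map each transport to the *name* of the env var holding its address
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1   # indirect expansion: the variable must be set
    echo "${!ip}"                 # resolves to 10.0.0.1 in this run
}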
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:55.087 05:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:55.655 nvme0n1 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:55.655 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:55.914 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:56.481 nvme0n1 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYwYjhhNzk3M2VjM2Q5ZWQ0YTRlMWY4MzlhN2UyZTNkMWYzMmFiYjI5YTc5MDdhoHn+8w==: 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: ]] 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTIzNDE3ODdjMDRkMzM2MGJkNmE1YTQwN2UzMGMzYmYMWQfu: 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:56.481 05:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:57.047 nvme0n1 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmViMDJhYTE4YWVmMjNkYzgzMjhiOGNmNDJhNzM3ZGM3YTljOTc1Mzg1ZDc5MzcyMTdkMDUxMDdjN2YwOGM0MjfIqi4=: 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:26:57.047 05:21:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:57.047 05:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:57.614 nvme0n1 01:26:57.614 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:57.614 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:26:57.614 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:26:57.614 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:57.614 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:57.614 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:57.614 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:57.614 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:26:57.614 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:57.614 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:57.877 request: 01:26:57.877 { 01:26:57.877 "name": "nvme0", 01:26:57.877 "trtype": "tcp", 01:26:57.877 "traddr": "10.0.0.1", 01:26:57.877 "adrfam": "ipv4", 01:26:57.877 "trsvcid": "4420", 01:26:57.877 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:26:57.877 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:26:57.877 "prchk_reftag": false, 01:26:57.877 "prchk_guard": false, 01:26:57.877 "hdgst": false, 01:26:57.877 "ddgst": false, 01:26:57.877 "allow_unrecognized_csi": false, 01:26:57.877 "method": "bdev_nvme_attach_controller", 01:26:57.877 "req_id": 1 01:26:57.877 } 01:26:57.877 Got JSON-RPC error response 01:26:57.877 response: 01:26:57.877 { 01:26:57.877 "code": -5, 01:26:57.877 "message": "Input/output error" 01:26:57.877 } 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:57.877 request: 01:26:57.877 { 01:26:57.877 "name": "nvme0", 01:26:57.877 "trtype": "tcp", 01:26:57.877 "traddr": "10.0.0.1", 01:26:57.877 "adrfam": "ipv4", 01:26:57.877 "trsvcid": "4420", 01:26:57.877 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:26:57.877 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:26:57.877 "prchk_reftag": false, 01:26:57.877 "prchk_guard": false, 01:26:57.877 "hdgst": false, 01:26:57.877 "ddgst": false, 01:26:57.877 "dhchap_key": "key2", 01:26:57.877 "allow_unrecognized_csi": false, 01:26:57.877 "method": "bdev_nvme_attach_controller", 01:26:57.877 "req_id": 1 01:26:57.877 } 01:26:57.877 Got JSON-RPC error response 01:26:57.877 response: 01:26:57.877 { 01:26:57.877 "code": -5, 01:26:57.877 "message": "Input/output error" 01:26:57.877 } 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:26:57.877 05:21:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:57.877 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:57.878 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:57.878 request: 01:26:57.878 { 01:26:58.184 "name": "nvme0", 01:26:58.184 "trtype": "tcp", 01:26:58.184 "traddr": "10.0.0.1", 01:26:58.184 "adrfam": "ipv4", 01:26:58.184 "trsvcid": "4420", 
01:26:58.184 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:26:58.184 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:26:58.184 "prchk_reftag": false, 01:26:58.184 "prchk_guard": false, 01:26:58.184 "hdgst": false, 01:26:58.184 "ddgst": false, 01:26:58.184 "dhchap_key": "key1", 01:26:58.184 "dhchap_ctrlr_key": "ckey2", 01:26:58.184 "allow_unrecognized_csi": false, 01:26:58.184 "method": "bdev_nvme_attach_controller", 01:26:58.184 "req_id": 1 01:26:58.184 } 01:26:58.184 Got JSON-RPC error response 01:26:58.184 response: 01:26:58.184 { 01:26:58.184 "code": -5, 01:26:58.184 "message": "Input/output error" 01:26:58.184 } 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:58.184 nvme0n1 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:58.184 request: 01:26:58.184 { 01:26:58.184 "name": "nvme0", 01:26:58.184 "dhchap_key": "key1", 01:26:58.184 "dhchap_ctrlr_key": "ckey2", 01:26:58.184 "method": "bdev_nvme_set_keys", 01:26:58.184 "req_id": 1 01:26:58.184 } 01:26:58.184 Got JSON-RPC error response 01:26:58.184 response: 01:26:58.184 
{ 01:26:58.184 "code": -13, 01:26:58.184 "message": "Permission denied" 01:26:58.184 } 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 01:26:58.184 05:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNmZDA2YTliNzIxYWZiMGIxOWI3MjcwNDIzNGQwYjA0OTllM2MxMWJlNDk1ODIw3m40Mg==: 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: ]] 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2UyMjIwODI3YjNmZGU5N2U5M2I0YTQxMDYzYWVkZmQ3ZGQxNWIzMGZkNDM1NTA3Mp/OWA==: 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:59.566 nvme0n1 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YThlMDI0MmFhMDAzZjc3ZGI5NjI4NGYxOTNhMzdmMmLmgnvi: 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: ]] 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE4ODRiZDgwY2VlZjM4NDFhMDE2ZTYyMzkzMTYyYzakYML0: 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:59.566 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:59.566 request: 01:26:59.566 { 01:26:59.566 "name": "nvme0", 01:26:59.567 "dhchap_key": "key2", 01:26:59.567 "dhchap_ctrlr_key": "ckey1", 01:26:59.567 "method": "bdev_nvme_set_keys", 01:26:59.567 "req_id": 1 01:26:59.567 } 01:26:59.567 Got JSON-RPC error response 01:26:59.567 response: 01:26:59.567 { 01:26:59.567 "code": -13, 01:26:59.567 "message": "Permission denied" 01:26:59.567 } 01:26:59.567 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:26:59.567 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:26:59.567 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:26:59.567 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:26:59.567 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:26:59.567 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 01:26:59.567 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 01:26:59.567 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:26:59.567 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:26:59.567 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:26:59.567 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 01:26:59.567 05:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 01:27:00.503 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 01:27:00.503 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 01:27:00.503 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:00.503 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:27:00.503 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:00.503 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 01:27:00.503 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 01:27:00.503 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 01:27:00.503 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 01:27:00.503 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 01:27:00.503 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 01:27:00.503 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:27:00.503 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 01:27:00.503 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 01:27:00.503 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:27:00.503 rmmod nvme_tcp 01:27:00.503 rmmod nvme_fabrics 01:27:00.762 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:27:00.762 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 01:27:00.762 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 01:27:00.762 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78202 ']' 01:27:00.762 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78202 01:27:00.762 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78202 ']' 01:27:00.762 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78202 01:27:00.762 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 01:27:00.762 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:00.762 05:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78202 01:27:00.762 killing process with pid 78202 01:27:00.762 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:00.762 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:00.762 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78202' 01:27:00.762 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78202 01:27:00.762 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78202 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:27:01.020 05:21:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:27:01.020 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 01:27:01.278 05:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:27:02.212 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:27:02.212 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
01:27:02.212 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:27:02.212 05:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.CKZ /tmp/spdk.key-null.OmX /tmp/spdk.key-sha256.unW /tmp/spdk.key-sha384.6Cm /tmp/spdk.key-sha512.1cs /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 01:27:02.212 05:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:27:02.777 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:27:02.777 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:27:02.777 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:27:02.777 01:27:02.777 real 0m37.117s 01:27:02.777 user 0m33.987s 01:27:02.777 sys 0m5.012s 01:27:02.777 ************************************ 01:27:02.777 END TEST nvmf_auth_host 01:27:02.777 ************************************ 01:27:02.777 05:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:02.777 05:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:27:03.034 ************************************ 01:27:03.034 START TEST nvmf_digest 01:27:03.034 ************************************ 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 01:27:03.034 * Looking for test storage... 
01:27:03.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 01:27:03.034 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:27:03.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:03.035 --rc genhtml_branch_coverage=1 01:27:03.035 --rc genhtml_function_coverage=1 01:27:03.035 --rc genhtml_legend=1 01:27:03.035 --rc geninfo_all_blocks=1 01:27:03.035 --rc geninfo_unexecuted_blocks=1 01:27:03.035 01:27:03.035 ' 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:27:03.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:03.035 --rc genhtml_branch_coverage=1 01:27:03.035 --rc genhtml_function_coverage=1 01:27:03.035 --rc genhtml_legend=1 01:27:03.035 --rc geninfo_all_blocks=1 01:27:03.035 --rc geninfo_unexecuted_blocks=1 01:27:03.035 01:27:03.035 ' 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:27:03.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:03.035 --rc genhtml_branch_coverage=1 01:27:03.035 --rc genhtml_function_coverage=1 01:27:03.035 --rc genhtml_legend=1 01:27:03.035 --rc geninfo_all_blocks=1 01:27:03.035 --rc geninfo_unexecuted_blocks=1 01:27:03.035 01:27:03.035 ' 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:27:03.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:03.035 --rc genhtml_branch_coverage=1 01:27:03.035 --rc genhtml_function_coverage=1 01:27:03.035 --rc genhtml_legend=1 01:27:03.035 --rc geninfo_all_blocks=1 01:27:03.035 --rc geninfo_unexecuted_blocks=1 01:27:03.035 01:27:03.035 ' 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:27:03.035 05:21:45 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:27:03.035 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:27:03.294 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:27:03.294 Cannot find device "nvmf_init_br" 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:27:03.294 Cannot find device "nvmf_init_br2" 01:27:03.294 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:27:03.295 Cannot find device "nvmf_tgt_br" 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 01:27:03.295 Cannot find device "nvmf_tgt_br2" 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:27:03.295 Cannot find device "nvmf_init_br" 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:27:03.295 Cannot find device "nvmf_init_br2" 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:27:03.295 Cannot find device "nvmf_tgt_br" 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:27:03.295 Cannot find device "nvmf_tgt_br2" 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:27:03.295 Cannot find device "nvmf_br" 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:27:03.295 Cannot find device "nvmf_init_if" 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:27:03.295 Cannot find device "nvmf_init_if2" 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:27:03.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:27:03.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:27:03.295 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:27:03.553 05:21:45 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:27:03.553 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:27:03.554 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:27:03.554 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 01:27:03.554 01:27:03.554 --- 10.0.0.3 ping statistics --- 01:27:03.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:03.554 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:27:03.554 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:27:03.554 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.092 ms 01:27:03.554 01:27:03.554 --- 10.0.0.4 ping statistics --- 01:27:03.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:03.554 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:27:03.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:27:03.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 01:27:03.554 01:27:03.554 --- 10.0.0.1 ping statistics --- 01:27:03.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:03.554 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:27:03.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:27:03.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 01:27:03.554 01:27:03.554 --- 10.0.0.2 ping statistics --- 01:27:03.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:03.554 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:27:03.554 ************************************ 01:27:03.554 START TEST nvmf_digest_clean 01:27:03.554 ************************************ 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
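For readers tracing the nvmf_veth_init step above, the commands it runs reduce to the condensed sketch below. Interface names, addresses and the 4420 port are taken from the log itself; the "Cannot find device" lines earlier are just the teardown of links that do not exist yet, and this sketch collapses the separate link-up and bridge-master steps into one loop.

# Condensed sketch of the topology nvmf_veth_init builds (names and addresses as logged)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator-side veth pairs
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target-side veth pairs
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # target ends move into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if     # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_br up
for br_end in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br_end" up
    ip link set "$br_end" master nvmf_br                        # the bridge ties both sides together
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # accept NVMe/TCP (port 4420) on the initiator links
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # allow bridged traffic to be forwarded
ping -c 1 10.0.0.3                                                    # host -> namespaced target reachability check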
01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79855 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79855 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79855 ']' 01:27:03.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:03.554 05:21:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:27:03.812 [2024-12-09 05:21:46.014509] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:27:03.812 [2024-12-09 05:21:46.014563] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:27:03.813 [2024-12-09 05:21:46.164472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:03.813 [2024-12-09 05:21:46.209420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:27:03.813 [2024-12-09 05:21:46.209523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:27:03.813 [2024-12-09 05:21:46.209533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:27:03.813 [2024-12-09 05:21:46.209538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:27:03.813 [2024-12-09 05:21:46.209542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
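The nvmfappstart step above amounts to launching the target inside that namespace and then waiting for its RPC socket to answer. Roughly, with the binary path and flags as logged, and a minimal polling loop standing in for the waitforlisten helper (the loop is an illustration, not the helper's actual code):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# block until the app answers on /var/tmp/spdk.sock before sending it any further RPCs
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done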
01:27:03.813 [2024-12-09 05:21:46.209838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:04.746 05:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:04.746 05:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:27:04.747 05:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:27:04.747 05:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 01:27:04.747 05:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:27:04.747 05:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:27:04.747 05:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 01:27:04.747 05:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 01:27:04.747 05:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 01:27:04.747 05:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:04.747 05:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:27:04.747 [2024-12-09 05:21:46.953589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:27:04.747 null0 01:27:04.747 [2024-12-09 05:21:46.999990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:27:04.747 [2024-12-09 05:21:47.028047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79887 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79887 /var/tmp/bperf.sock 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79887 ']' 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:27:04.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:04.747 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:27:04.747 [2024-12-09 05:21:47.086028] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:27:04.747 [2024-12-09 05:21:47.086170] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79887 ] 01:27:05.005 [2024-12-09 05:21:47.214097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:05.005 [2024-12-09 05:21:47.269498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:27:05.571 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:05.571 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:27:05.571 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:27:05.571 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:27:05.571 05:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:27:05.829 [2024-12-09 05:21:48.221388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:27:05.829 05:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:27:05.829 05:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:27:06.394 nvme0n1 01:27:06.394 05:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:27:06.394 05:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:27:06.394 Running I/O for 2 seconds... 
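Each digest-clean pass in this log has the same shape: start bdevperf on its own RPC socket, finish its framework init, attach an NVMe/TCP controller with data digest enabled, then kick off the I/O. Condensed from the entries above for this first run (randread, 4 KiB blocks, queue depth 128; full script paths shortened here to rpc.py and bdevperf.py):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
rpc.py -s /var/tmp/bperf.sock framework_start_init
rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0    # --ddgst enables the data digest
bdevperf.py -s /var/tmp/bperf.sock perform_tests    # prints "Running I/O for 2 seconds..."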
01:27:08.303 18542.00 IOPS, 72.43 MiB/s [2024-12-09T05:21:50.759Z] 18796.00 IOPS, 73.42 MiB/s 01:27:08.303 Latency(us) 01:27:08.303 [2024-12-09T05:21:50.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:08.303 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:27:08.303 nvme0n1 : 2.03 18681.41 72.97 0.00 0.00 6846.60 6181.56 31594.65 01:27:08.303 [2024-12-09T05:21:50.759Z] =================================================================================================================== 01:27:08.303 [2024-12-09T05:21:50.759Z] Total : 18681.41 72.97 0.00 0.00 6846.60 6181.56 31594.65 01:27:08.303 { 01:27:08.303 "results": [ 01:27:08.303 { 01:27:08.303 "job": "nvme0n1", 01:27:08.303 "core_mask": "0x2", 01:27:08.303 "workload": "randread", 01:27:08.303 "status": "finished", 01:27:08.303 "queue_depth": 128, 01:27:08.304 "io_size": 4096, 01:27:08.304 "runtime": 2.025918, 01:27:08.304 "iops": 18681.407638413795, 01:27:08.304 "mibps": 72.97424858755389, 01:27:08.304 "io_failed": 0, 01:27:08.304 "io_timeout": 0, 01:27:08.304 "avg_latency_us": 6846.602445816372, 01:27:08.304 "min_latency_us": 6181.561572052402, 01:27:08.304 "max_latency_us": 31594.648034934497 01:27:08.304 } 01:27:08.304 ], 01:27:08.304 "core_count": 1 01:27:08.304 } 01:27:08.563 05:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:27:08.563 05:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:27:08.563 05:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:27:08.563 05:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:27:08.563 05:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:27:08.563 | select(.opcode=="crc32c") 01:27:08.563 | "\(.module_name) \(.executed)"' 01:27:08.563 05:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:27:08.563 05:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:27:08.563 05:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:27:08.563 05:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:27:08.563 05:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79887 01:27:08.563 05:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79887 ']' 01:27:08.563 05:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79887 01:27:08.563 05:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:27:08.563 05:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:08.563 05:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79887 01:27:08.822 killing process with pid 79887 01:27:08.822 Received shutdown signal, test time was about 2.000000 seconds 01:27:08.822 01:27:08.822 Latency(us) 01:27:08.822 [2024-12-09T05:21:51.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
01:27:08.822 [2024-12-09T05:21:51.278Z] =================================================================================================================== 01:27:08.822 [2024-12-09T05:21:51.278Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:27:08.822 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:27:08.822 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:27:08.822 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79887' 01:27:08.822 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79887 01:27:08.822 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79887 01:27:09.081 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 01:27:09.081 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:27:09.081 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:27:09.081 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 01:27:09.081 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 01:27:09.081 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 01:27:09.081 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:27:09.081 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79953 01:27:09.081 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 01:27:09.081 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79953 /var/tmp/bperf.sock 01:27:09.081 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79953 ']' 01:27:09.081 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:27:09.081 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:09.081 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:27:09.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:27:09.081 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:09.081 05:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:27:09.081 [2024-12-09 05:21:51.417457] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:27:09.081 [2024-12-09 05:21:51.417583] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 01:27:09.081 Zero copy mechanism will not be used. 
01:27:09.081 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79953 ] 01:27:09.339 [2024-12-09 05:21:51.570161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:09.339 [2024-12-09 05:21:51.642047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:27:09.906 05:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:09.906 05:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:27:09.906 05:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:27:09.906 05:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:27:09.906 05:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:27:10.165 [2024-12-09 05:21:52.531784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:27:10.165 05:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:27:10.165 05:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:27:10.424 nvme0n1 01:27:10.424 05:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:27:10.424 05:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:27:10.683 I/O size of 131072 is greater than zero copy threshold (65536). 01:27:10.683 Zero copy mechanism will not be used. 01:27:10.683 Running I/O for 2 seconds... 
01:27:12.554 7120.00 IOPS, 890.00 MiB/s [2024-12-09T05:21:55.010Z] 7056.00 IOPS, 882.00 MiB/s 01:27:12.554 Latency(us) 01:27:12.554 [2024-12-09T05:21:55.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:12.554 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 01:27:12.554 nvme0n1 : 2.00 7052.78 881.60 0.00 0.00 2265.48 2046.21 3777.62 01:27:12.554 [2024-12-09T05:21:55.010Z] =================================================================================================================== 01:27:12.554 [2024-12-09T05:21:55.010Z] Total : 7052.78 881.60 0.00 0.00 2265.48 2046.21 3777.62 01:27:12.554 { 01:27:12.554 "results": [ 01:27:12.554 { 01:27:12.554 "job": "nvme0n1", 01:27:12.554 "core_mask": "0x2", 01:27:12.554 "workload": "randread", 01:27:12.554 "status": "finished", 01:27:12.554 "queue_depth": 16, 01:27:12.554 "io_size": 131072, 01:27:12.554 "runtime": 2.003181, 01:27:12.554 "iops": 7052.782549355251, 01:27:12.554 "mibps": 881.5978186694064, 01:27:12.554 "io_failed": 0, 01:27:12.554 "io_timeout": 0, 01:27:12.554 "avg_latency_us": 2265.4815629528157, 01:27:12.554 "min_latency_us": 2046.2113537117905, 01:27:12.554 "max_latency_us": 3777.62096069869 01:27:12.554 } 01:27:12.554 ], 01:27:12.554 "core_count": 1 01:27:12.554 } 01:27:12.554 05:21:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:27:12.554 05:21:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:27:12.554 05:21:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:27:12.554 | select(.opcode=="crc32c") 01:27:12.554 | "\(.module_name) \(.executed)"' 01:27:12.554 05:21:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:27:12.554 05:21:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:27:12.813 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:27:12.813 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:27:12.813 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:27:12.813 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:27:12.813 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79953 01:27:12.813 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79953 ']' 01:27:12.813 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79953 01:27:12.813 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:27:12.813 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:12.813 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79953 01:27:12.813 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:27:12.813 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
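After each run the test pulls the accel framework's crc32c statistics over the same socket and verifies that digests were actually computed, and by the expected module (software here, since scan_dsa=false). The throughput numbers are also easy to sanity-check against the JSON: MiB/s is iops * io_size / 2^20, e.g. 7052.78 * 131072 / 1048576 ≈ 881.6 for the run above. A rough restatement of the check (rpc.py path shortened):

rpc.py -s /var/tmp/bperf.sock accel_get_stats \
  | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
  | { read -r acc_module acc_executed
      (( acc_executed > 0 )) || exit 1           # some crc32c operations must have executed
      [[ $acc_module == software ]] || exit 1    # and the software module must have done them
    }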
01:27:12.813 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79953' 01:27:12.813 killing process with pid 79953 01:27:12.813 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79953 01:27:12.813 Received shutdown signal, test time was about 2.000000 seconds 01:27:12.813 01:27:12.813 Latency(us) 01:27:12.813 [2024-12-09T05:21:55.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:12.813 [2024-12-09T05:21:55.269Z] =================================================================================================================== 01:27:12.813 [2024-12-09T05:21:55.269Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:27:12.813 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79953 01:27:13.380 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 01:27:13.380 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:27:13.380 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:27:13.380 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 01:27:13.380 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 01:27:13.380 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 01:27:13.380 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:27:13.380 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80008 01:27:13.380 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 01:27:13.380 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80008 /var/tmp/bperf.sock 01:27:13.380 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80008 ']' 01:27:13.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:27:13.380 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:27:13.380 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:13.380 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:27:13.380 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:13.380 05:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:27:13.380 [2024-12-09 05:21:55.601066] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:27:13.380 [2024-12-09 05:21:55.601186] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80008 ] 01:27:13.380 [2024-12-09 05:21:55.734031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:13.380 [2024-12-09 05:21:55.807335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:27:14.316 05:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:14.316 05:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:27:14.316 05:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:27:14.316 05:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:27:14.316 05:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:27:14.316 [2024-12-09 05:21:56.768297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:27:14.576 05:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:27:14.576 05:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:27:14.835 nvme0n1 01:27:14.835 05:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:27:14.835 05:21:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:27:14.835 Running I/O for 2 seconds... 
01:27:17.168 19178.00 IOPS, 74.91 MiB/s [2024-12-09T05:21:59.624Z] 20130.00 IOPS, 78.63 MiB/s 01:27:17.168 Latency(us) 01:27:17.168 [2024-12-09T05:21:59.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:17.168 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:27:17.168 nvme0n1 : 2.00 20158.09 78.74 0.00 0.00 6344.48 5551.96 13221.67 01:27:17.168 [2024-12-09T05:21:59.624Z] =================================================================================================================== 01:27:17.168 [2024-12-09T05:21:59.624Z] Total : 20158.09 78.74 0.00 0.00 6344.48 5551.96 13221.67 01:27:17.168 { 01:27:17.168 "results": [ 01:27:17.168 { 01:27:17.168 "job": "nvme0n1", 01:27:17.168 "core_mask": "0x2", 01:27:17.168 "workload": "randwrite", 01:27:17.168 "status": "finished", 01:27:17.168 "queue_depth": 128, 01:27:17.168 "io_size": 4096, 01:27:17.168 "runtime": 2.003563, 01:27:17.168 "iops": 20158.088365576725, 01:27:17.168 "mibps": 78.74253267803408, 01:27:17.168 "io_failed": 0, 01:27:17.168 "io_timeout": 0, 01:27:17.168 "avg_latency_us": 6344.479844374199, 01:27:17.168 "min_latency_us": 5551.95807860262, 01:27:17.168 "max_latency_us": 13221.673362445415 01:27:17.168 } 01:27:17.168 ], 01:27:17.168 "core_count": 1 01:27:17.168 } 01:27:17.168 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:27:17.168 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:27:17.168 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:27:17.168 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:27:17.168 | select(.opcode=="crc32c") 01:27:17.168 | "\(.module_name) \(.executed)"' 01:27:17.168 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:27:17.168 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:27:17.168 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:27:17.168 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:27:17.168 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:27:17.168 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80008 01:27:17.169 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80008 ']' 01:27:17.169 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80008 01:27:17.169 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:27:17.169 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:17.169 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80008 01:27:17.169 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:27:17.169 killing process with pid 80008 01:27:17.169 Received shutdown signal, test time was about 2.000000 seconds 01:27:17.169 
01:27:17.169 Latency(us) 01:27:17.169 [2024-12-09T05:21:59.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:17.169 [2024-12-09T05:21:59.625Z] =================================================================================================================== 01:27:17.169 [2024-12-09T05:21:59.625Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:27:17.169 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:27:17.169 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80008' 01:27:17.169 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80008 01:27:17.169 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80008 01:27:17.428 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 01:27:17.429 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:27:17.429 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:27:17.429 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 01:27:17.429 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 01:27:17.429 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 01:27:17.429 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:27:17.429 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80069 01:27:17.429 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 01:27:17.429 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80069 /var/tmp/bperf.sock 01:27:17.429 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80069 ']' 01:27:17.429 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:27:17.429 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:17.429 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:27:17.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:27:17.429 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:17.429 05:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:27:17.429 I/O size of 131072 is greater than zero copy threshold (65536). 01:27:17.429 Zero copy mechanism will not be used. 01:27:17.429 [2024-12-09 05:21:59.842865] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:27:17.429 [2024-12-09 05:21:59.843023] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80069 ] 01:27:17.688 [2024-12-09 05:21:59.995183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:17.688 [2024-12-09 05:22:00.063676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:27:18.627 05:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:18.627 05:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:27:18.627 05:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:27:18.627 05:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:27:18.627 05:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:27:18.627 [2024-12-09 05:22:00.960467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:27:18.627 05:22:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:27:18.627 05:22:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:27:19.195 nvme0n1 01:27:19.195 05:22:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:27:19.195 05:22:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:27:19.195 I/O size of 131072 is greater than zero copy threshold (65536). 01:27:19.195 Zero copy mechanism will not be used. 01:27:19.195 Running I/O for 2 seconds... 
01:27:21.067 6247.00 IOPS, 780.88 MiB/s [2024-12-09T05:22:03.524Z] 6158.50 IOPS, 769.81 MiB/s 01:27:21.068 Latency(us) 01:27:21.068 [2024-12-09T05:22:03.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:21.068 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 01:27:21.068 nvme0n1 : 2.00 6156.01 769.50 0.00 0.00 2594.88 1774.34 10588.79 01:27:21.068 [2024-12-09T05:22:03.524Z] =================================================================================================================== 01:27:21.068 [2024-12-09T05:22:03.524Z] Total : 6156.01 769.50 0.00 0.00 2594.88 1774.34 10588.79 01:27:21.068 { 01:27:21.068 "results": [ 01:27:21.068 { 01:27:21.068 "job": "nvme0n1", 01:27:21.068 "core_mask": "0x2", 01:27:21.068 "workload": "randwrite", 01:27:21.068 "status": "finished", 01:27:21.068 "queue_depth": 16, 01:27:21.068 "io_size": 131072, 01:27:21.068 "runtime": 2.003407, 01:27:21.068 "iops": 6156.013231460208, 01:27:21.068 "mibps": 769.501653932526, 01:27:21.068 "io_failed": 0, 01:27:21.068 "io_timeout": 0, 01:27:21.068 "avg_latency_us": 2594.882989756244, 01:27:21.068 "min_latency_us": 1774.3371179039302, 01:27:21.068 "max_latency_us": 10588.786026200873 01:27:21.068 } 01:27:21.068 ], 01:27:21.068 "core_count": 1 01:27:21.068 } 01:27:21.068 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:27:21.068 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:27:21.068 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:27:21.068 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:27:21.068 | select(.opcode=="crc32c") 01:27:21.068 | "\(.module_name) \(.executed)"' 01:27:21.068 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:27:21.327 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:27:21.327 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:27:21.327 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:27:21.327 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:27:21.327 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80069 01:27:21.327 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80069 ']' 01:27:21.327 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80069 01:27:21.327 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:27:21.327 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:21.327 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80069 01:27:21.587 killing process with pid 80069 01:27:21.587 Received shutdown signal, test time was about 2.000000 seconds 01:27:21.587 01:27:21.587 Latency(us) 01:27:21.587 [2024-12-09T05:22:04.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
01:27:21.587 [2024-12-09T05:22:04.043Z] =================================================================================================================== 01:27:21.587 [2024-12-09T05:22:04.043Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:27:21.587 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:27:21.587 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:27:21.587 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80069' 01:27:21.587 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80069 01:27:21.587 05:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80069 01:27:21.845 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79855 01:27:21.845 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79855 ']' 01:27:21.845 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79855 01:27:21.845 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:27:21.845 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:21.845 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79855 01:27:21.845 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:21.845 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:21.845 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79855' 01:27:21.845 killing process with pid 79855 01:27:21.845 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79855 01:27:21.845 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79855 01:27:22.104 01:27:22.104 real 0m18.427s 01:27:22.104 user 0m34.414s 01:27:22.104 sys 0m5.396s 01:27:22.104 ************************************ 01:27:22.104 END TEST nvmf_digest_clean 01:27:22.104 ************************************ 01:27:22.104 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:22.104 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:27:22.104 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 01:27:22.105 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:27:22.105 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:22.105 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:27:22.105 ************************************ 01:27:22.105 START TEST nvmf_digest_error 01:27:22.105 ************************************ 01:27:22.105 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 01:27:22.105 05:22:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 01:27:22.105 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:27:22.105 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 01:27:22.105 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:22.105 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80152 01:27:22.105 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:27:22.105 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80152 01:27:22.105 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80152 ']' 01:27:22.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:27:22.105 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:22.105 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:22.105 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:22.105 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:22.105 05:22:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:22.105 [2024-12-09 05:22:04.511639] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:27:22.105 [2024-12-09 05:22:04.511807] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:27:22.363 [2024-12-09 05:22:04.649619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:22.363 [2024-12-09 05:22:04.701531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:27:22.363 [2024-12-09 05:22:04.701572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:27:22.363 [2024-12-09 05:22:04.701581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:27:22.363 [2024-12-09 05:22:04.701588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:27:22.363 [2024-12-09 05:22:04.701593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
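The nvmf_digest_error test starting here reuses the same target and bperf plumbing but adds crc32c error injection; the knobs it turns over the next few entries reduce to this sketch (RPC script paths shortened, values as logged):

# target side (rpc_cmd): route crc32c work through the 'error' accel module
rpc.py accel_assign_opc -o crc32c -m error
# bperf side: keep NVMe error statistics and retry indefinitely instead of failing the bdev
rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# injection stays disabled while the controller attaches...
rpc.py accel_error_inject_error -o crc32c -t disable
# ...and is then switched to corrupting 256 crc32c operations to provoke digest errors
rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256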
01:27:22.363 [2024-12-09 05:22:04.701938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:23.299 [2024-12-09 05:22:05.448964] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:23.299 [2024-12-09 05:22:05.497278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:27:23.299 null0 01:27:23.299 [2024-12-09 05:22:05.543283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:27:23.299 [2024-12-09 05:22:05.567393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80184 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80184 /var/tmp/bperf.sock 01:27:23.299 05:22:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80184 ']' 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:27:23.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:23.299 05:22:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:23.299 [2024-12-09 05:22:05.627102] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:27:23.299 [2024-12-09 05:22:05.627264] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80184 ] 01:27:23.557 [2024-12-09 05:22:05.772673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:23.557 [2024-12-09 05:22:05.824041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:27:23.557 [2024-12-09 05:22:05.865361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:27:24.139 05:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:24.139 05:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:27:24.139 05:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:27:24.139 05:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:27:24.397 05:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:27:24.397 05:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:24.397 05:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:24.397 05:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:24.397 05:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:27:24.397 05:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:27:24.657 nvme0n1 01:27:24.657 05:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 01:27:24.657 05:22:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:24.657 05:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:24.657 05:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:24.657 05:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:27:24.657 05:22:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:27:24.657 Running I/O for 2 seconds... 01:27:24.657 [2024-12-09 05:22:07.100772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.657 [2024-12-09 05:22:07.100826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.657 [2024-12-09 05:22:07.100836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.916 [2024-12-09 05:22:07.114305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.916 [2024-12-09 05:22:07.114358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.916 [2024-12-09 05:22:07.114367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.916 [2024-12-09 05:22:07.127938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.916 [2024-12-09 05:22:07.128030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.916 [2024-12-09 05:22:07.128041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.916 [2024-12-09 05:22:07.141406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.916 [2024-12-09 05:22:07.141445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.916 [2024-12-09 05:22:07.141453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.916 [2024-12-09 05:22:07.154642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.916 [2024-12-09 05:22:07.154676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.916 [2024-12-09 05:22:07.154683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.916 [2024-12-09 05:22:07.167900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.916 [2024-12-09 05:22:07.167932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10304 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.916 [2024-12-09 05:22:07.167940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.916 [2024-12-09 05:22:07.180898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.916 [2024-12-09 05:22:07.180928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.916 [2024-12-09 05:22:07.180935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.917 [2024-12-09 05:22:07.193971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.917 [2024-12-09 05:22:07.194001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.917 [2024-12-09 05:22:07.194008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.917 [2024-12-09 05:22:07.206979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.917 [2024-12-09 05:22:07.207012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.917 [2024-12-09 05:22:07.207020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.917 [2024-12-09 05:22:07.219970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.917 [2024-12-09 05:22:07.220001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.917 [2024-12-09 05:22:07.220008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.917 [2024-12-09 05:22:07.233100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.917 [2024-12-09 05:22:07.233130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.917 [2024-12-09 05:22:07.233138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.917 [2024-12-09 05:22:07.246429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.917 [2024-12-09 05:22:07.246458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.917 [2024-12-09 05:22:07.246466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.917 [2024-12-09 05:22:07.259857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.917 [2024-12-09 05:22:07.259946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:25437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.917 [2024-12-09 05:22:07.259956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.917 [2024-12-09 05:22:07.273467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.917 [2024-12-09 05:22:07.273500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.917 [2024-12-09 05:22:07.273507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.917 [2024-12-09 05:22:07.286719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.917 [2024-12-09 05:22:07.286791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.917 [2024-12-09 05:22:07.286799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.917 [2024-12-09 05:22:07.299914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.917 [2024-12-09 05:22:07.299944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.917 [2024-12-09 05:22:07.299951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.917 [2024-12-09 05:22:07.312896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.917 [2024-12-09 05:22:07.312926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.917 [2024-12-09 05:22:07.312933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.917 [2024-12-09 05:22:07.326003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.917 [2024-12-09 05:22:07.326035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.917 [2024-12-09 05:22:07.326042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.917 [2024-12-09 05:22:07.339018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.917 [2024-12-09 05:22:07.339051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.917 [2024-12-09 05:22:07.339058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.917 [2024-12-09 05:22:07.352017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.917 [2024-12-09 05:22:07.352049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.917 [2024-12-09 05:22:07.352056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:24.917 [2024-12-09 05:22:07.365027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:24.917 [2024-12-09 05:22:07.365057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:24.917 [2024-12-09 05:22:07.365064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.177 [2024-12-09 05:22:07.378016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.177 [2024-12-09 05:22:07.378047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.177 [2024-12-09 05:22:07.378054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.177 [2024-12-09 05:22:07.390995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.177 [2024-12-09 05:22:07.391024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.177 [2024-12-09 05:22:07.391031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.177 [2024-12-09 05:22:07.403987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.177 [2024-12-09 05:22:07.404017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.177 [2024-12-09 05:22:07.404024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.177 [2024-12-09 05:22:07.416971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.177 [2024-12-09 05:22:07.417000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.177 [2024-12-09 05:22:07.417008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.177 [2024-12-09 05:22:07.429975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.177 [2024-12-09 05:22:07.430046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.177 [2024-12-09 05:22:07.430055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.177 [2024-12-09 05:22:07.443108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.177 
[2024-12-09 05:22:07.443139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.178 [2024-12-09 05:22:07.443146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.178 [2024-12-09 05:22:07.456123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.178 [2024-12-09 05:22:07.456153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.178 [2024-12-09 05:22:07.456160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.178 [2024-12-09 05:22:07.469781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.178 [2024-12-09 05:22:07.469812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.178 [2024-12-09 05:22:07.469819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.178 [2024-12-09 05:22:07.483128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.178 [2024-12-09 05:22:07.483169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.178 [2024-12-09 05:22:07.483176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.178 [2024-12-09 05:22:07.496829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.178 [2024-12-09 05:22:07.496900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.178 [2024-12-09 05:22:07.496909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.178 [2024-12-09 05:22:07.509954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.178 [2024-12-09 05:22:07.509984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.178 [2024-12-09 05:22:07.509991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.178 [2024-12-09 05:22:07.522971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.178 [2024-12-09 05:22:07.523000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.178 [2024-12-09 05:22:07.523007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.178 [2024-12-09 05:22:07.536014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xdcdfb0) 01:27:25.178 [2024-12-09 05:22:07.536044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.178 [2024-12-09 05:22:07.536051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.178 [2024-12-09 05:22:07.549043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.178 [2024-12-09 05:22:07.549075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.178 [2024-12-09 05:22:07.549082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.178 [2024-12-09 05:22:07.562111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.178 [2024-12-09 05:22:07.562142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.178 [2024-12-09 05:22:07.562149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.178 [2024-12-09 05:22:07.575121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.178 [2024-12-09 05:22:07.575152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.178 [2024-12-09 05:22:07.575160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.178 [2024-12-09 05:22:07.588117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.178 [2024-12-09 05:22:07.588147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.178 [2024-12-09 05:22:07.588154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.178 [2024-12-09 05:22:07.601119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.178 [2024-12-09 05:22:07.601187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.178 [2024-12-09 05:22:07.601196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.178 [2024-12-09 05:22:07.614161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.178 [2024-12-09 05:22:07.614192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.178 [2024-12-09 05:22:07.614199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.178 [2024-12-09 05:22:07.627110] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.178 [2024-12-09 05:22:07.627140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.178 [2024-12-09 05:22:07.627146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.640139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.640168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.640175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.653170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.653200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.653207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.666150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.666180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.666187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.679758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.679788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.679795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.693259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.693290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.693298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.706585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.706616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.706623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
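Note on the repeated errors in this stretch: they are the intended outcome of the phase. The host attaches the controller with --ddgst while the target's crc32c path sits on the error accel module with 'corrupt' injection enabled, so reads return with a bad data digest and are completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal sketch of the host/bdevperf side, using only commands echoed earlier in this log (paths abbreviated relative to the SPDK repo):

# start bdevperf idle (-z) on its own RPC socket: randread, 4 KiB I/O, QD 128, 2-second runs
./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &

# host-side NVMe options, then attach over TCP with data digest enabled
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# on the target's default RPC socket: inject 'corrupt' errors into the crc32c opcode (flags as echoed in the log)
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

# kick off the queued bdevperf job
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests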
01:27:25.438 [2024-12-09 05:22:07.719705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.719733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.719740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.732719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.732747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.732754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.745735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.745763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.745769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.758765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.758795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.758802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.773067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.773099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.773106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.788627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.788716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.788726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.804533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.804567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.804575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.820832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.820870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.820880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.836027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.836115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.836125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.851738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.851799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.851810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.867486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.867521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.867529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.438 [2024-12-09 05:22:07.883165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.438 [2024-12-09 05:22:07.883195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.438 [2024-12-09 05:22:07.883212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.698 [2024-12-09 05:22:07.898925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.698 [2024-12-09 05:22:07.898955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.698 [2024-12-09 05:22:07.898962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.698 [2024-12-09 05:22:07.914669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.698 [2024-12-09 05:22:07.914697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.698 [2024-12-09 05:22:07.914705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.698 [2024-12-09 05:22:07.930346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.698 [2024-12-09 05:22:07.930377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.698 [2024-12-09 05:22:07.930386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.698 [2024-12-09 05:22:07.946141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.698 [2024-12-09 05:22:07.946178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.698 [2024-12-09 05:22:07.946188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.698 [2024-12-09 05:22:07.968933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.699 [2024-12-09 05:22:07.969021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.699 [2024-12-09 05:22:07.969032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.699 [2024-12-09 05:22:07.984729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.699 [2024-12-09 05:22:07.984767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.699 [2024-12-09 05:22:07.984776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.699 [2024-12-09 05:22:08.000755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.699 [2024-12-09 05:22:08.000787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.699 [2024-12-09 05:22:08.000794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.699 [2024-12-09 05:22:08.016535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.699 [2024-12-09 05:22:08.016583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.699 [2024-12-09 05:22:08.016591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.699 [2024-12-09 05:22:08.032426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.699 [2024-12-09 05:22:08.032483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:27:25.699 [2024-12-09 05:22:08.032492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.699 [2024-12-09 05:22:08.048306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.699 [2024-12-09 05:22:08.048414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.699 [2024-12-09 05:22:08.048425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.699 [2024-12-09 05:22:08.064442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.699 [2024-12-09 05:22:08.064478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.699 [2024-12-09 05:22:08.064488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.699 17964.00 IOPS, 70.17 MiB/s [2024-12-09T05:22:08.155Z] [2024-12-09 05:22:08.081594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.699 [2024-12-09 05:22:08.081626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.699 [2024-12-09 05:22:08.081633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.699 [2024-12-09 05:22:08.097283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.699 [2024-12-09 05:22:08.097386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.699 [2024-12-09 05:22:08.097395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.699 [2024-12-09 05:22:08.113020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.699 [2024-12-09 05:22:08.113090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.699 [2024-12-09 05:22:08.113099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.699 [2024-12-09 05:22:08.128841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.699 [2024-12-09 05:22:08.128872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.699 [2024-12-09 05:22:08.128880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.699 [2024-12-09 05:22:08.144574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.699 [2024-12-09 05:22:08.144605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.699 [2024-12-09 05:22:08.144625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.959 [2024-12-09 05:22:08.160299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.959 [2024-12-09 05:22:08.160349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.959 [2024-12-09 05:22:08.160359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.959 [2024-12-09 05:22:08.176159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.959 [2024-12-09 05:22:08.176195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.959 [2024-12-09 05:22:08.176205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.959 [2024-12-09 05:22:08.192056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.959 [2024-12-09 05:22:08.192094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.959 [2024-12-09 05:22:08.192104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.959 [2024-12-09 05:22:08.207898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.959 [2024-12-09 05:22:08.207982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.959 [2024-12-09 05:22:08.207993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.959 [2024-12-09 05:22:08.223664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.959 [2024-12-09 05:22:08.223700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.959 [2024-12-09 05:22:08.223709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.959 [2024-12-09 05:22:08.239543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.959 [2024-12-09 05:22:08.239576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.959 [2024-12-09 05:22:08.239585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.959 [2024-12-09 05:22:08.255421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.959 
[2024-12-09 05:22:08.255457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.959 [2024-12-09 05:22:08.255466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.959 [2024-12-09 05:22:08.271371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.959 [2024-12-09 05:22:08.271406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.959 [2024-12-09 05:22:08.271416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.959 [2024-12-09 05:22:08.287145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.959 [2024-12-09 05:22:08.287176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.959 [2024-12-09 05:22:08.287183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.959 [2024-12-09 05:22:08.303034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.959 [2024-12-09 05:22:08.303066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.959 [2024-12-09 05:22:08.303074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.959 [2024-12-09 05:22:08.318860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.959 [2024-12-09 05:22:08.318889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.959 [2024-12-09 05:22:08.318896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.959 [2024-12-09 05:22:08.334498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.959 [2024-12-09 05:22:08.334541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.959 [2024-12-09 05:22:08.334549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.959 [2024-12-09 05:22:08.350070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.959 [2024-12-09 05:22:08.350107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.959 [2024-12-09 05:22:08.350116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.959 [2024-12-09 05:22:08.365772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xdcdfb0) 01:27:25.960 [2024-12-09 05:22:08.365807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.960 [2024-12-09 05:22:08.365816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.960 [2024-12-09 05:22:08.381452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.960 [2024-12-09 05:22:08.381486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.960 [2024-12-09 05:22:08.381496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.960 [2024-12-09 05:22:08.396949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.960 [2024-12-09 05:22:08.397036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.960 [2024-12-09 05:22:08.397047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:25.960 [2024-12-09 05:22:08.412522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:25.960 [2024-12-09 05:22:08.412558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:25.960 [2024-12-09 05:22:08.412567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.219 [2024-12-09 05:22:08.428074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.219 [2024-12-09 05:22:08.428158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.219 [2024-12-09 05:22:08.428169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.219 [2024-12-09 05:22:08.443768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.219 [2024-12-09 05:22:08.443804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.219 [2024-12-09 05:22:08.443813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.219 [2024-12-09 05:22:08.459439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.219 [2024-12-09 05:22:08.459474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.219 [2024-12-09 05:22:08.459483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.219 [2024-12-09 05:22:08.474881] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.219 [2024-12-09 05:22:08.474934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.219 [2024-12-09 05:22:08.474944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.219 [2024-12-09 05:22:08.490422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.220 [2024-12-09 05:22:08.490457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.220 [2024-12-09 05:22:08.490467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.220 [2024-12-09 05:22:08.505882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.220 [2024-12-09 05:22:08.505916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.220 [2024-12-09 05:22:08.505925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.220 [2024-12-09 05:22:08.521505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.220 [2024-12-09 05:22:08.521536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.220 [2024-12-09 05:22:08.521543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.220 [2024-12-09 05:22:08.537045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.220 [2024-12-09 05:22:08.537133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.220 [2024-12-09 05:22:08.537142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.220 [2024-12-09 05:22:08.552773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.220 [2024-12-09 05:22:08.552804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.220 [2024-12-09 05:22:08.552812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.220 [2024-12-09 05:22:08.568553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.220 [2024-12-09 05:22:08.568582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.220 [2024-12-09 05:22:08.568589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
01:27:26.220 [2024-12-09 05:22:08.584290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.220 [2024-12-09 05:22:08.584339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.220 [2024-12-09 05:22:08.584349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.220 [2024-12-09 05:22:08.599834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.220 [2024-12-09 05:22:08.599867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.220 [2024-12-09 05:22:08.599876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.220 [2024-12-09 05:22:08.615448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.220 [2024-12-09 05:22:08.615481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.220 [2024-12-09 05:22:08.615490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.220 [2024-12-09 05:22:08.631025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.220 [2024-12-09 05:22:08.631056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.220 [2024-12-09 05:22:08.631063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.220 [2024-12-09 05:22:08.646794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.220 [2024-12-09 05:22:08.646822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.220 [2024-12-09 05:22:08.646830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.220 [2024-12-09 05:22:08.662347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.220 [2024-12-09 05:22:08.662381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.220 [2024-12-09 05:22:08.662390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.480 [2024-12-09 05:22:08.677863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.480 [2024-12-09 05:22:08.677898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.480 [2024-12-09 05:22:08.677907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.480 [2024-12-09 05:22:08.693474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.480 [2024-12-09 05:22:08.693509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.480 [2024-12-09 05:22:08.693519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.480 [2024-12-09 05:22:08.708982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.480 [2024-12-09 05:22:08.709067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.480 [2024-12-09 05:22:08.709079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.480 [2024-12-09 05:22:08.724531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.480 [2024-12-09 05:22:08.724568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.480 [2024-12-09 05:22:08.724578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.480 [2024-12-09 05:22:08.740029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.480 [2024-12-09 05:22:08.740111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.480 [2024-12-09 05:22:08.740122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.480 [2024-12-09 05:22:08.755690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.480 [2024-12-09 05:22:08.755725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.480 [2024-12-09 05:22:08.755734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.480 [2024-12-09 05:22:08.770176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.480 [2024-12-09 05:22:08.770207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.480 [2024-12-09 05:22:08.770214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.480 [2024-12-09 05:22:08.783195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.480 [2024-12-09 05:22:08.783231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.480 [2024-12-09 05:22:08.783238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.480 [2024-12-09 05:22:08.796206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.480 [2024-12-09 05:22:08.796238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.480 [2024-12-09 05:22:08.796245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.480 [2024-12-09 05:22:08.809215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.480 [2024-12-09 05:22:08.809245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.480 [2024-12-09 05:22:08.809252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.480 [2024-12-09 05:22:08.822784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.480 [2024-12-09 05:22:08.822813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.480 [2024-12-09 05:22:08.822820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.480 [2024-12-09 05:22:08.837006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.480 [2024-12-09 05:22:08.837038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.480 [2024-12-09 05:22:08.837046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.480 [2024-12-09 05:22:08.850846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.480 [2024-12-09 05:22:08.850874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.480 [2024-12-09 05:22:08.850881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.480 [2024-12-09 05:22:08.864547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.481 [2024-12-09 05:22:08.864575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.481 [2024-12-09 05:22:08.864582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.481 [2024-12-09 05:22:08.877751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.481 [2024-12-09 05:22:08.877786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.481 [2024-12-09 05:22:08.877793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.481 [2024-12-09 05:22:08.890816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.481 [2024-12-09 05:22:08.890903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.481 [2024-12-09 05:22:08.890912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.481 [2024-12-09 05:22:08.903969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.481 [2024-12-09 05:22:08.903998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.481 [2024-12-09 05:22:08.904005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.481 [2024-12-09 05:22:08.916971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.481 [2024-12-09 05:22:08.916999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.481 [2024-12-09 05:22:08.917007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.481 [2024-12-09 05:22:08.929916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.481 [2024-12-09 05:22:08.929942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.481 [2024-12-09 05:22:08.929949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.741 [2024-12-09 05:22:08.948488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.741 [2024-12-09 05:22:08.948555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.741 [2024-12-09 05:22:08.948564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.741 [2024-12-09 05:22:08.961408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.741 [2024-12-09 05:22:08.961437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.741 [2024-12-09 05:22:08.961444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.741 [2024-12-09 05:22:08.974290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.741 [2024-12-09 05:22:08.974321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.741 [2024-12-09 
05:22:08.974343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.741 [2024-12-09 05:22:08.987158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.741 [2024-12-09 05:22:08.987188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.741 [2024-12-09 05:22:08.987195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.741 [2024-12-09 05:22:09.000120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.741 [2024-12-09 05:22:09.000150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.741 [2024-12-09 05:22:09.000158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.741 [2024-12-09 05:22:09.013137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.741 [2024-12-09 05:22:09.013167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.741 [2024-12-09 05:22:09.013174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.741 [2024-12-09 05:22:09.026023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.741 [2024-12-09 05:22:09.026105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.741 [2024-12-09 05:22:09.026114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.741 [2024-12-09 05:22:09.038996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.741 [2024-12-09 05:22:09.039026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.741 [2024-12-09 05:22:09.039033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.741 [2024-12-09 05:22:09.051973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.741 [2024-12-09 05:22:09.052003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:26.741 [2024-12-09 05:22:09.052010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.741 [2024-12-09 05:22:09.065607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdcdfb0) 01:27:26.741 [2024-12-09 05:22:09.065648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14214 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:27:26.741 [2024-12-09 05:22:09.065656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:26.741 17521.00 IOPS, 68.44 MiB/s 01:27:26.741 Latency(us) 01:27:26.741 [2024-12-09T05:22:09.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:26.741 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:27:26.741 nvme0n1 : 2.01 17540.28 68.52 0.00 0.00 7292.72 6324.65 30678.86 01:27:26.741 [2024-12-09T05:22:09.197Z] =================================================================================================================== 01:27:26.741 [2024-12-09T05:22:09.197Z] Total : 17540.28 68.52 0.00 0.00 7292.72 6324.65 30678.86 01:27:26.741 { 01:27:26.741 "results": [ 01:27:26.741 { 01:27:26.741 "job": "nvme0n1", 01:27:26.741 "core_mask": "0x2", 01:27:26.741 "workload": "randread", 01:27:26.741 "status": "finished", 01:27:26.741 "queue_depth": 128, 01:27:26.741 "io_size": 4096, 01:27:26.741 "runtime": 2.005099, 01:27:26.741 "iops": 17540.28105345422, 01:27:26.742 "mibps": 68.51672286505554, 01:27:26.742 "io_failed": 0, 01:27:26.742 "io_timeout": 0, 01:27:26.742 "avg_latency_us": 7292.718079446928, 01:27:26.742 "min_latency_us": 6324.65327510917, 01:27:26.742 "max_latency_us": 30678.86113537118 01:27:26.742 } 01:27:26.742 ], 01:27:26.742 "core_count": 1 01:27:26.742 } 01:27:26.742 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:27:26.742 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:27:26.742 | .driver_specific 01:27:26.742 | .nvme_error 01:27:26.742 | .status_code 01:27:26.742 | .command_transient_transport_error' 01:27:26.742 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:27:26.742 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:27:27.001 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 137 > 0 )) 01:27:27.001 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80184 01:27:27.001 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80184 ']' 01:27:27.001 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80184 01:27:27.001 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:27:27.001 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:27.001 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80184 01:27:27.001 killing process with pid 80184 01:27:27.001 Received shutdown signal, test time was about 2.000000 seconds 01:27:27.001 01:27:27.001 Latency(us) 01:27:27.001 [2024-12-09T05:22:09.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:27.001 [2024-12-09T05:22:09.457Z] =================================================================================================================== 01:27:27.001 [2024-12-09T05:22:09.457Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:27:27.001 05:22:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:27:27.001 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:27:27.001 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80184' 01:27:27.001 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80184 01:27:27.001 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80184 01:27:27.260 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 01:27:27.260 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:27:27.260 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 01:27:27.260 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 01:27:27.260 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 01:27:27.260 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80244 01:27:27.260 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 01:27:27.260 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80244 /var/tmp/bperf.sock 01:27:27.260 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80244 ']' 01:27:27.260 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:27:27.260 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:27.260 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:27:27.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:27:27.260 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:27.260 05:22:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:27.260 [2024-12-09 05:22:09.602955] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:27:27.260 [2024-12-09 05:22:09.603074] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 01:27:27.260 Zero copy mechanism will not be used. 
01:27:27.260 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80244 ] 01:27:27.519 [2024-12-09 05:22:09.753938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:27.519 [2024-12-09 05:22:09.799036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:27:27.519 [2024-12-09 05:22:09.840204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:27:28.087 05:22:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:28.087 05:22:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:27:28.087 05:22:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:27:28.087 05:22:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:27:28.348 05:22:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:27:28.348 05:22:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:28.348 05:22:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:28.348 05:22:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:28.348 05:22:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:27:28.348 05:22:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:27:28.606 nvme0n1 01:27:28.606 05:22:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 01:27:28.606 05:22:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:28.606 05:22:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:28.606 05:22:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:28.606 05:22:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:27:28.607 05:22:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:27:28.607 I/O size of 131072 is greater than zero copy threshold (65536). 01:27:28.607 Zero copy mechanism will not be used. 01:27:28.607 Running I/O for 2 seconds... 
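For readability, the digest-error pass that host/digest.sh drives in the trace above boils down to the following RPC sequence against the bdevperf instance. This is a minimal sketch reconstructed only from commands visible in the xtrace (socket path, script paths, controller/bdev names, and flags are copied verbatim from the trace), not a definitive replay of the script:

    # host/digest.sh-style flow, condensed from the xtrace above (sketch, not the script itself)
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    SOCK=/var/tmp/bperf.sock

    # enable per-controller NVMe error counters and unlimited bdev retries
    $RPC -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # start with crc32c error injection disabled
    $RPC -s $SOCK accel_error_inject_error -o crc32c -t disable
    # attach the target with data digest enabled (--ddgst)
    $RPC -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # inject crc32c corruption for the digest path (-i 32 taken verbatim from the trace)
    $RPC -s $SOCK accel_error_inject_error -o crc32c -t corrupt -i 32
    # run the queued randread workload; digest mismatches then surface as the
    # COMMAND TRANSIENT TRANSPORT ERROR completions in the records that follow
    $BPERF_PY -s $SOCK perform_tests
    # afterwards the script reads the transient-error counter back out and
    # asserts it is greater than zero
    $RPC -s $SOCK bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
        | .driver_specific | .nvme_error | .status_code
        | .command_transient_transport_error'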
01:27:28.867 [2024-12-09 05:22:11.065597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.065646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.065655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.069723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.069757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.069766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.073446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.073477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.073485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.077423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.077454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.077461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.081133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.081204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.081213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.085026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.085063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.085073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.088852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.088884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.088892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.092691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.092726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.092736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.096519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.096549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.096557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.100242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.100354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.100371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.104184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.104262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.104271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.107928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.107961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.107969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.111916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.111949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.111957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.115602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.115631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.115638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.119319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.119419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.119430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.123390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.123422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.123430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.127120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.127186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.127195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.131175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.131213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.131221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.135095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.135125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.135132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.139114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.139144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.139152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.142952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.142982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.142989] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.146986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.147015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.147023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.150796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.150825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.150832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.154801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.154832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.154839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.158474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.158545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.158577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.162466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.162540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.162571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.166237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.166309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.166354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.170285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.170382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:27:28.867 [2024-12-09 05:22:11.170415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.174044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.174134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.174165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.178131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.178206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.178237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.182010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.182042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.182050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.185942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.185973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.185980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.189663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.189695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.189703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.193576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.193620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.867 [2024-12-09 05:22:11.193627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:28.867 [2024-12-09 05:22:11.197260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.867 [2024-12-09 05:22:11.197340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.197349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.201297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.201386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.201395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.205019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.205099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.205108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.208964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.209002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.209012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.212764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.212796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.212803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.216540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.216587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.216596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.220225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.220294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.220302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.223938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.224032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.224042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.227874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.227906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.227913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.231434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.231464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.231471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.235256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.235320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.235343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.238981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.239041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.239049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.243007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.243038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.243045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.246694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.246723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.246730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.250646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 
01:27:28.868 [2024-12-09 05:22:11.250676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.250683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.254218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.254287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.254311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.258213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.258244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.258250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.261842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.261870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.261877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.265718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.265749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.265756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.269332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.269360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.269367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.273201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.273291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.273302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.277029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.277106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.277115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.281011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.281048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.281057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.284850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.284882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.284889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.288772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.288807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.288816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.292429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.292459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.292466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.296068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.296159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.296170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.299987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.300019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.300027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.303606] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.303637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.303644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.307579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.307611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.307618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.311218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.311294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.311303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:28.868 [2024-12-09 05:22:11.315256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:28.868 [2024-12-09 05:22:11.315285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:28.868 [2024-12-09 05:22:11.315292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.128 [2024-12-09 05:22:11.318984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.128 [2024-12-09 05:22:11.319014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.128 [2024-12-09 05:22:11.319021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.128 [2024-12-09 05:22:11.322854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.128 [2024-12-09 05:22:11.322884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.128 [2024-12-09 05:22:11.322891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.326566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.326595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.326602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 01:27:29.129 [2024-12-09 05:22:11.330475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.330504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.330512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.334090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.334172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.334180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.338126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.338171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.338178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.341832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.341863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.341869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.345919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.345964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.345971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.349675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.349707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.349714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.353629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.353661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.353668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.357425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.357455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.357462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.361463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.361493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.361500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.365320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.365360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.365367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.369340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.369377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.369385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.373171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.373256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.373266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.377244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.377281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.377290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.381047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.381078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.381085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.384872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.384909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.384918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.388822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.388854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.388861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.392766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.392812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.392821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.396705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.396737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.396744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.400620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.400651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.400658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.404296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.404386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.404396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.408408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.408442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 
[2024-12-09 05:22:11.408451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.412088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.412174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.412184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.416245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.416281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.129 [2024-12-09 05:22:11.416300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.129 [2024-12-09 05:22:11.419929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.129 [2024-12-09 05:22:11.419961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.419968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.423797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.423832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.423841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.427602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.427633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.427640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.431405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.431441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.431450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.435253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.435347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6176 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.435358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.439149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.439245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.439258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.443057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.443086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.443093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.447018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.447049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.447055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.450856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.450884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.450891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.454661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.454694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.454703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.458557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.458586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.458592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.462281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.462387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.462403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.466202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.466284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.466294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.470092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.470172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.470182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.474004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.474036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.474043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.477881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.477917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.477925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.481783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.481816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.481823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.485512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.485548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.485557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.489280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.489389] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.489398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.493062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.493152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.493162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.496949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.496981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.496989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.500677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.500708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.500715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.504306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.504390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.504399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.507996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.508063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.508071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.511698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.511730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.511736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.515337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c9d9b0) 01:27:29.130 [2024-12-09 05:22:11.515365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.130 [2024-12-09 05:22:11.515372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.130 [2024-12-09 05:22:11.518929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.518992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.519001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.131 [2024-12-09 05:22:11.522633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.522662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.522669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.131 [2024-12-09 05:22:11.526222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.526285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.526294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.131 [2024-12-09 05:22:11.530001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.530031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.530038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.131 [2024-12-09 05:22:11.533678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.533708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.533716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.131 [2024-12-09 05:22:11.537390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.537418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.537425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.131 [2024-12-09 05:22:11.540983] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.541065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.541074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.131 [2024-12-09 05:22:11.544708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.544739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.544746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.131 [2024-12-09 05:22:11.548290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.548370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.548379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.131 [2024-12-09 05:22:11.551967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.552032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.552041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.131 [2024-12-09 05:22:11.555731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.555762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.555769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.131 [2024-12-09 05:22:11.559358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.559387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.559394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.131 [2024-12-09 05:22:11.562939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.563003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.563011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 01:27:29.131 [2024-12-09 05:22:11.566647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.566676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.566683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.131 [2024-12-09 05:22:11.570249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.570310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.570319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.131 [2024-12-09 05:22:11.573942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.574005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.574013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.131 [2024-12-09 05:22:11.577660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.131 [2024-12-09 05:22:11.577691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.131 [2024-12-09 05:22:11.577698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.390 [2024-12-09 05:22:11.581312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.390 [2024-12-09 05:22:11.581390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.581399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.585069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.585149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.585157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.588896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.588926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.588933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.592584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.592614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.592621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.596376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.596406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.596414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.600103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.600203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.600212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.603973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.604004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.604012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.607694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.607724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.607731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.611406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.611436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.611443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.615139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.615167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.615175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.618955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.618985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.618992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.622732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.622763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.622771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.626417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.626446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.626453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.630195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.630265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.630274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.634067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.634097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.634104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.637848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.637878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.637885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.641511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.641541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 
[2024-12-09 05:22:11.641548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.645171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.645239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.645248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.649135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.649166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.649173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.652923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.652953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.652960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.656723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.656754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.656762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.660516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.660546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.660553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.664118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.664185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.664194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.667886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.667918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17376 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.667926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.671602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.671632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.671639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.675351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.675380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.675387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.678988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.391 [2024-12-09 05:22:11.679060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.391 [2024-12-09 05:22:11.679069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.391 [2024-12-09 05:22:11.682849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.682879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.682887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.686486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.686515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.686522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.690136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.690200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.690208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.693976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.694006] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.694013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.697706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.697736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.697743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.701739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.701771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.701779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.705555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.705586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.705593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.709389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.709420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.709428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.713090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.713156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.713165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.716969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.717002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.717010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.720743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.720775] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.720782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.724455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.724484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.724491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.728302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.728389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.728399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.732095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.732163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.732172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.735839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.735871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.735879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.739541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.739572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.739579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.743224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.743284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.743308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.746909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.746970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.746978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.750625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.750653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.750661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.754222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.754282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.754291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.757959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.758018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.758042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.761716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.761745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.761752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.765362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.765389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.765396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.768993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.769073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.769082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.772685] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.772715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.772723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.776259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.776340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.776348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.779972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.392 [2024-12-09 05:22:11.780039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.392 [2024-12-09 05:22:11.780048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.392 [2024-12-09 05:22:11.783624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.783654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.783662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.393 [2024-12-09 05:22:11.787283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.787361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.787370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.393 [2024-12-09 05:22:11.790986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.791047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.791071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.393 [2024-12-09 05:22:11.794737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.794765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.794772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 01:27:29.393 [2024-12-09 05:22:11.798377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.798404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.798411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.393 [2024-12-09 05:22:11.801956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.802037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.802046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.393 [2024-12-09 05:22:11.805761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.805791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.805798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.393 [2024-12-09 05:22:11.809433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.809463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.809471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.393 [2024-12-09 05:22:11.813066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.813133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.813141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.393 [2024-12-09 05:22:11.816800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.816830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.816837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.393 [2024-12-09 05:22:11.820395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.820427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.820434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.393 [2024-12-09 05:22:11.823928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.823994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.824003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.393 [2024-12-09 05:22:11.827685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.827716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.827722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.393 [2024-12-09 05:22:11.831288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.831360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.831369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.393 [2024-12-09 05:22:11.834981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.835040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.835064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.393 [2024-12-09 05:22:11.838716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.838744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.838751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.393 [2024-12-09 05:22:11.842382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.393 [2024-12-09 05:22:11.842410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.393 [2024-12-09 05:22:11.842417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.653 [2024-12-09 05:22:11.845982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.653 [2024-12-09 05:22:11.846060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.653 [2024-12-09 05:22:11.846069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.653 [2024-12-09 05:22:11.849744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.653 [2024-12-09 05:22:11.849773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.653 [2024-12-09 05:22:11.849780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.653 [2024-12-09 05:22:11.853385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.653 [2024-12-09 05:22:11.853412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.653 [2024-12-09 05:22:11.853418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.653 [2024-12-09 05:22:11.857022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.653 [2024-12-09 05:22:11.857087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.653 [2024-12-09 05:22:11.857095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.653 [2024-12-09 05:22:11.860820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.653 [2024-12-09 05:22:11.860851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.653 [2024-12-09 05:22:11.860858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.653 [2024-12-09 05:22:11.864420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.653 [2024-12-09 05:22:11.864450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.653 [2024-12-09 05:22:11.864457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.653 [2024-12-09 05:22:11.867964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.653 [2024-12-09 05:22:11.868033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.653 [2024-12-09 05:22:11.868041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.653 [2024-12-09 05:22:11.871747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.653 [2024-12-09 05:22:11.871778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.653 
[2024-12-09 05:22:11.871785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.653 [2024-12-09 05:22:11.875437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.653 [2024-12-09 05:22:11.875467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.653 [2024-12-09 05:22:11.875474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.653 [2024-12-09 05:22:11.879038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.653 [2024-12-09 05:22:11.879098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.653 [2024-12-09 05:22:11.879123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.653 [2024-12-09 05:22:11.882784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.653 [2024-12-09 05:22:11.882813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.653 [2024-12-09 05:22:11.882820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.653 [2024-12-09 05:22:11.886438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.653 [2024-12-09 05:22:11.886466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.653 [2024-12-09 05:22:11.886473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.653 [2024-12-09 05:22:11.890063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.653 [2024-12-09 05:22:11.890144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.653 [2024-12-09 05:22:11.890152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.653 [2024-12-09 05:22:11.893796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.653 [2024-12-09 05:22:11.893828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.653 [2024-12-09 05:22:11.893836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.653 [2024-12-09 05:22:11.897491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.653 [2024-12-09 05:22:11.897521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16864 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.897528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.901206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.901272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.901280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.904998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.905079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.905088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.908929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.908964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.908971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.912716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.912751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.912759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.916698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.916732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.916740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.920562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.920597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.920606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.924335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.924366] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.924373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.928047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.928136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.928146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.931859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.931893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.931901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.935558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.935590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.935597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.939234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.939311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.939321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.942966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.943027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.943051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.946699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.946727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.946735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.950368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.950394] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.950402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.954018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.954089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.954098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.957779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.957810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.957817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.961459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.961490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.961497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.965109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.965186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.965195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.968860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.968892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.968899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.972481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.972511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.972519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.976039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.976107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.976115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.979815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.979848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.979855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.983860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.983896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.983904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.987521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.987553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.987561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.991152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.991225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.991233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.994902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.994961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.654 [2024-12-09 05:22:11.994986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.654 [2024-12-09 05:22:11.999126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.654 [2024-12-09 05:22:11.999162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:11.999170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.002973] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.003006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.003013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.006683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.006711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.006718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.010386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.010413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.010419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.014035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.014107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.014115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.017848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.017881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.017888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.021546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.021576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.021583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.025189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.025272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.025281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.028942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.029024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.029032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.032798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.032829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.032836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.036398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.036430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.036437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.040074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.040162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.040172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.043845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.043877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.043884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.047434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.047464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.047471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.050962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.051025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.051033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.655 8122.00 IOPS, 1015.25 MiB/s [2024-12-09T05:22:12.111Z] [2024-12-09 05:22:12.056062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.056097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.056105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.059872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.059908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.059917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.063704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.063734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.063741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.067381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.067412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.067419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.071065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.071096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.071104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.074733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.074764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.074771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.078432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.078463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 
05:22:12.078470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.082075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.082143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.082151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.085934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.085965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.085972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.089574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.089604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.089611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.093245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.093331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.093355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.097063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.097139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.655 [2024-12-09 05:22:12.097171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.655 [2024-12-09 05:22:12.100830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.655 [2024-12-09 05:22:12.100921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.656 [2024-12-09 05:22:12.100955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.656 [2024-12-09 05:22:12.104620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.656 [2024-12-09 05:22:12.104692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:27:29.656 [2024-12-09 05:22:12.104724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.916 [2024-12-09 05:22:12.108307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.108412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.108459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.112069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.112142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.112173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.115850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.115922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.115960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.120427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.120540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.120598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.124565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.124643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.124675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.128374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.128475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.128507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.132165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.132260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.132297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.135927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.136016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.136050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.139691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.139784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.139818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.143812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.143889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.143962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.147761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.147794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.147802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.151654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.151690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.151699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.155553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.155585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.155593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.159386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.159426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.159433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.163360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.163393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.163401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.167155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.167228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.167237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.171197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.171235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.171242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.175122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.175152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.175160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.179066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.179096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.179104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.182906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.182988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.183053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.186907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 
[2024-12-09 05:22:12.186986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.187062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.190773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.190833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.190842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.194509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.194537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.194544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.198130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.198191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.198215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.201838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.201868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.201875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.205529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.917 [2024-12-09 05:22:12.205558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.917 [2024-12-09 05:22:12.205565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.917 [2024-12-09 05:22:12.209147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.209230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.209238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.212910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.212942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.212948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.216558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.216589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.216597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.220180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.220251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.220259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.223931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.223996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.224005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.227587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.227619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.227626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.231196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.231266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.231274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.234900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.234960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.234984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.238602] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.238631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.238638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.242243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.242304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.242313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.245946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.246024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.246033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.250386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.250421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.250429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.254030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.254062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.254069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.257573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.257603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.257610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.261201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.261270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.261279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 01:27:29.918 [2024-12-09 05:22:12.265066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.265146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.265154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.268896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.268926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.268933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.272555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.272586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.272593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.276318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.276403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.276412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.279962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.280029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.280040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.283737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.283826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.283881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.287645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.287744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.287792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.291443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.291520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.291562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.295253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.295331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.295382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.298968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.299053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.299087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.302996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.303068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.303099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.306825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.918 [2024-12-09 05:22:12.306893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.918 [2024-12-09 05:22:12.306925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.918 [2024-12-09 05:22:12.310630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.919 [2024-12-09 05:22:12.310702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.919 [2024-12-09 05:22:12.310734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.919 [2024-12-09 05:22:12.314412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.919 [2024-12-09 05:22:12.314498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.919 [2024-12-09 05:22:12.314531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.919 [2024-12-09 05:22:12.318107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.919 [2024-12-09 05:22:12.318195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.919 [2024-12-09 05:22:12.318227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.919 [2024-12-09 05:22:12.321866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.919 [2024-12-09 05:22:12.321955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.919 [2024-12-09 05:22:12.321986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.919 [2024-12-09 05:22:12.325629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.919 [2024-12-09 05:22:12.325701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.919 [2024-12-09 05:22:12.325732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.919 [2024-12-09 05:22:12.329284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.919 [2024-12-09 05:22:12.329384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.919 [2024-12-09 05:22:12.329418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.919 [2024-12-09 05:22:12.333078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.919 [2024-12-09 05:22:12.333166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.919 [2024-12-09 05:22:12.333197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.919 [2024-12-09 05:22:12.336855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.919 [2024-12-09 05:22:12.336943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.919 [2024-12-09 05:22:12.336975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.919 [2024-12-09 05:22:12.340619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.919 [2024-12-09 05:22:12.340707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:27:29.919 [2024-12-09 05:22:12.340738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.919 [2024-12-09 05:22:12.344351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.919 [2024-12-09 05:22:12.344423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.919 [2024-12-09 05:22:12.344454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.919 [2024-12-09 05:22:12.348037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.919 [2024-12-09 05:22:12.348109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.919 [2024-12-09 05:22:12.348156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.919 [2024-12-09 05:22:12.351743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.919 [2024-12-09 05:22:12.351816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.919 [2024-12-09 05:22:12.351847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:29.919 [2024-12-09 05:22:12.355429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.919 [2024-12-09 05:22:12.355501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.919 [2024-12-09 05:22:12.355538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:29.919 [2024-12-09 05:22:12.359138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.919 [2024-12-09 05:22:12.359225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.919 [2024-12-09 05:22:12.359257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:29.919 [2024-12-09 05:22:12.362902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.919 [2024-12-09 05:22:12.362987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.919 [2024-12-09 05:22:12.363021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:29.919 [2024-12-09 05:22:12.366802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:29.919 [2024-12-09 05:22:12.366872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6880 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:29.919 [2024-12-09 05:22:12.366904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.180 [2024-12-09 05:22:12.370522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.180 [2024-12-09 05:22:12.370608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.180 [2024-12-09 05:22:12.370640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.180 [2024-12-09 05:22:12.374369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.180 [2024-12-09 05:22:12.374481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.180 [2024-12-09 05:22:12.374514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.180 [2024-12-09 05:22:12.378141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.180 [2024-12-09 05:22:12.378246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.180 [2024-12-09 05:22:12.378277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.180 [2024-12-09 05:22:12.381934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.180 [2024-12-09 05:22:12.382020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.180 [2024-12-09 05:22:12.382052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.180 [2024-12-09 05:22:12.386367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.386454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.386486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.390169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.390243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.390274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.393951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.394042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.394074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.397738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.397826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.397857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.401560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.401636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.401702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.405341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.405438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.405469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.409087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.409160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.409191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.412949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.413040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.413072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.416707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.416797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.416828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.420533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 
[2024-12-09 05:22:12.420623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.420656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.424228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.424302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.424352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.428072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.428136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.428145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.431915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.431946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.431954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.435582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.435615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.435623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.439308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.439387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.439396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.443043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.443112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.443121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.446934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.446964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.446971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.450628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.450656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.450663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.454379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.454406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.454414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.458021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.458113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.458122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.461860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.461893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.461900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.465736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.465767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.465775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.469355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.469383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.469390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.472947] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.473017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.473025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.476683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.476714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.476721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.480416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.480447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.181 [2024-12-09 05:22:12.480455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.181 [2024-12-09 05:22:12.484108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.181 [2024-12-09 05:22:12.484177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.484190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.487861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.487892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.487899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.491588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.491620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.491628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.495231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.495298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.495307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
01:27:30.182 [2024-12-09 05:22:12.499103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.499185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.499194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.503216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.503257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.503268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.507271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.507304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.507311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.511052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.511081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.511088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.514788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.514816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.514823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.518743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.518772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.518779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.522398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.522425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.522432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.526203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.526274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.526283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.530010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.530078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.530087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.533913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.533946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.533954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.537639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.537671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.537678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.541319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.541358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.541366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.545191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.545259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.545268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.549102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.549134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.549141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.552888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.552919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.552926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.556600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.556630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.556637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.560287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.560372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.560381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.564060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.564139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.564149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.568025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.568058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.568066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.571769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.571801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.571809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.575513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.575543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:27:30.182 [2024-12-09 05:22:12.575551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.579316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.579353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.579360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.583075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.583139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.583164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.182 [2024-12-09 05:22:12.586948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.182 [2024-12-09 05:22:12.586977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.182 [2024-12-09 05:22:12.586984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.183 [2024-12-09 05:22:12.590653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.183 [2024-12-09 05:22:12.590682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.183 [2024-12-09 05:22:12.590690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.183 [2024-12-09 05:22:12.594370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.183 [2024-12-09 05:22:12.594396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.183 [2024-12-09 05:22:12.594404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.183 [2024-12-09 05:22:12.598080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.183 [2024-12-09 05:22:12.598147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.183 [2024-12-09 05:22:12.598155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.183 [2024-12-09 05:22:12.602426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.183 [2024-12-09 05:22:12.602465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4640 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.183 [2024-12-09 05:22:12.602475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.183 [2024-12-09 05:22:12.606262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.183 [2024-12-09 05:22:12.606295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.183 [2024-12-09 05:22:12.606302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.183 [2024-12-09 05:22:12.610002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.183 [2024-12-09 05:22:12.610033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.183 [2024-12-09 05:22:12.610040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.183 [2024-12-09 05:22:12.613869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.183 [2024-12-09 05:22:12.613900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.183 [2024-12-09 05:22:12.613907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.183 [2024-12-09 05:22:12.617748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.183 [2024-12-09 05:22:12.617778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.183 [2024-12-09 05:22:12.617785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.183 [2024-12-09 05:22:12.621374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.183 [2024-12-09 05:22:12.621403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.183 [2024-12-09 05:22:12.621410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.183 [2024-12-09 05:22:12.625145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.183 [2024-12-09 05:22:12.625212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.183 [2024-12-09 05:22:12.625221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.183 [2024-12-09 05:22:12.628860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.183 [2024-12-09 05:22:12.628888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.183 [2024-12-09 05:22:12.628895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.183 [2024-12-09 05:22:12.632513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.183 [2024-12-09 05:22:12.632545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.183 [2024-12-09 05:22:12.632553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.636249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.636334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.636343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.639921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.639954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.639961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.643685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.643716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.643724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.647346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.647374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.647381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.651019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.651094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.651103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.654745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 
01:27:30.445 [2024-12-09 05:22:12.654773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.654781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.658510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.658537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.658544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.662169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.662251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.662259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.665932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.665964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.665971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.669639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.669670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.669677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.673261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.673353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.673362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.676915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.677007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.677015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.680700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.680731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.680739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.684307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.684383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.684392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.688118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.688186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.688196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.691825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.691855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.691862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.695440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.695469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.695477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.699581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.699616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.699623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.703117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.703179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.703211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.706917] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.706947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.706954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.710651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.710679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.710686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.714416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.714443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.714451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.718050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.718142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.718151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.721861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.721890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.721898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.445 [2024-12-09 05:22:12.725501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.445 [2024-12-09 05:22:12.725533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.445 [2024-12-09 05:22:12.725539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.729333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.729428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.729440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
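The entries above and below are the expected signature of this digest test: nvme_tcp_accel_seq_recv_compute_crc32_done in the host-side NVMe/TCP driver reports a data digest (CRC-32C) mismatch on a received READ, and the command is then printed as completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) rather than returning data that failed its checksum. A rough tally can be taken straight from a saved console log with grep; the capture filename below is hypothetical, and the authoritative per-controller count is read back later via bdev_get_iostat.

  # 'console.log' is a hypothetical capture of this output; the authoritative
  # count comes from bdev_get_iostat further down in the run.
  grep -c 'data digest error on tqpair' console.log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log
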
01:27:30.446 [2024-12-09 05:22:12.733189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.733282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.733292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.737032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.737097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.737105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.740938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.740972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.740981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.744637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.744667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.744674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.748271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.748352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.748361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.752439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.752477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.752486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.756381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.756425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.756432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.760158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.760230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.760240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.763956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.763988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.763996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.767916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.767949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.767956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.771663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.771693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.771701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.775354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.775384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.775391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.779041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.779107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.779115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.782864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.782893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.782901] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.786511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.786540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.786547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.790259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.790351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.790362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.794163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.794228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.794236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.797910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.797941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.797948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.802056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.802089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.802096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.805750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.805786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.805793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.809441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.809472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 
05:22:12.809479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.813274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.813383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.813394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.817089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.817156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.817164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.820984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.821016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.821023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.824819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.824851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.824858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.446 [2024-12-09 05:22:12.828494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.446 [2024-12-09 05:22:12.828524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.446 [2024-12-09 05:22:12.828532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.832216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.832300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.832308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.836098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.836132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.836140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.839982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.840015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.840023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.843745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.843780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.843787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.847450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.847480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.847487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.851216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.851309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.851320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.855497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.855531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.855539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.859328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.859422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.859433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.863381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.863412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.863420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.867050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.867122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.867131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.871007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.871037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.871044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.874659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.874689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.874696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.878533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.878562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.878569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.882294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.882378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.882387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.886194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.886278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.886290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.890001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.890077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.890087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.447 [2024-12-09 05:22:12.893778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.447 [2024-12-09 05:22:12.893809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.447 [2024-12-09 05:22:12.893816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.708 [2024-12-09 05:22:12.897412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.708 [2024-12-09 05:22:12.897442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.708 [2024-12-09 05:22:12.897449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.708 [2024-12-09 05:22:12.901566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.708 [2024-12-09 05:22:12.901603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.708 [2024-12-09 05:22:12.901612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.708 [2024-12-09 05:22:12.905274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.708 [2024-12-09 05:22:12.905382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.708 [2024-12-09 05:22:12.905391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.708 [2024-12-09 05:22:12.908983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.708 [2024-12-09 05:22:12.909059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.708 [2024-12-09 05:22:12.909068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.708 [2024-12-09 05:22:12.912746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.708 [2024-12-09 05:22:12.912778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.708 [2024-12-09 05:22:12.912785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.708 [2024-12-09 05:22:12.916360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.708 
[2024-12-09 05:22:12.916389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.708 [2024-12-09 05:22:12.916397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.708 [2024-12-09 05:22:12.920055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.708 [2024-12-09 05:22:12.920122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.708 [2024-12-09 05:22:12.920130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.708 [2024-12-09 05:22:12.923824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.708 [2024-12-09 05:22:12.923857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.708 [2024-12-09 05:22:12.923864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.708 [2024-12-09 05:22:12.927706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.708 [2024-12-09 05:22:12.927738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.708 [2024-12-09 05:22:12.927746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.708 [2024-12-09 05:22:12.931469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.708 [2024-12-09 05:22:12.931501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.708 [2024-12-09 05:22:12.931508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.708 [2024-12-09 05:22:12.935316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.708 [2024-12-09 05:22:12.935353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.708 [2024-12-09 05:22:12.935361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.708 [2024-12-09 05:22:12.939129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.708 [2024-12-09 05:22:12.939225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.708 [2024-12-09 05:22:12.939235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.708 [2024-12-09 05:22:12.943063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.708 [2024-12-09 05:22:12.943096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.708 [2024-12-09 05:22:12.943104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.708 [2024-12-09 05:22:12.947062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.708 [2024-12-09 05:22:12.947093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:12.947101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:12.950918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:12.950950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:12.950957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:12.954712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:12.954743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:12.954751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:12.958444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:12.958472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:12.958479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:12.962183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:12.962267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:12.962275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:12.966032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:12.966064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:12.966071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:12.970184] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:12.970214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:12.970221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:12.973902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:12.973932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:12.973939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:12.977730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:12.977761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:12.977768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:12.981423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:12.981453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:12.981461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:12.985018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:12.985091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:12.985100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:12.988880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:12.988911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:12.988918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:12.992589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:12.992620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:12.992628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 01:27:30.709 [2024-12-09 05:22:12.996263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:12.996359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:12.996368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:13.000039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:13.000103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:13.000111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:13.003888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:13.003921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:13.003929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:13.007624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:13.007655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:13.007662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:13.011266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:13.011338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:13.011347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:13.015022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:13.015080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:13.015105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:13.018733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:13.018762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:13.018769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:13.022399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:13.022427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:13.022434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:13.026744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:13.026784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:13.026794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:13.030537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:13.030567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:13.030575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:13.034217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:13.034294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:13.034319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:13.038116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:13.038193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:13.038202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:13.041892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:13.041923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:13.041930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:13.045659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.709 [2024-12-09 05:22:13.045690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.709 [2024-12-09 05:22:13.045697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:30.709 [2024-12-09 05:22:13.049446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.710 [2024-12-09 05:22:13.049477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.710 [2024-12-09 05:22:13.049484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:30.710 8137.50 IOPS, 1017.19 MiB/s [2024-12-09T05:22:13.166Z] [2024-12-09 05:22:13.054257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d9b0) 01:27:30.710 [2024-12-09 05:22:13.054284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:30.710 [2024-12-09 05:22:13.054292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:30.710 01:27:30.710 Latency(us) 01:27:30.710 [2024-12-09T05:22:13.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:30.710 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 01:27:30.710 nvme0n1 : 2.00 8135.81 1016.98 0.00 0.00 1963.63 1724.26 11790.76 01:27:30.710 [2024-12-09T05:22:13.166Z] =================================================================================================================== 01:27:30.710 [2024-12-09T05:22:13.166Z] Total : 8135.81 1016.98 0.00 0.00 1963.63 1724.26 11790.76 01:27:30.710 { 01:27:30.710 "results": [ 01:27:30.710 { 01:27:30.710 "job": "nvme0n1", 01:27:30.710 "core_mask": "0x2", 01:27:30.710 "workload": "randread", 01:27:30.710 "status": "finished", 01:27:30.710 "queue_depth": 16, 01:27:30.710 "io_size": 131072, 01:27:30.710 "runtime": 2.002382, 01:27:30.710 "iops": 8135.810249992259, 01:27:30.710 "mibps": 1016.9762812490324, 01:27:30.710 "io_failed": 0, 01:27:30.710 "io_timeout": 0, 01:27:30.710 "avg_latency_us": 1963.6255438277467, 01:27:30.710 "min_latency_us": 1724.2550218340612, 01:27:30.710 "max_latency_us": 11790.756331877728 01:27:30.710 } 01:27:30.710 ], 01:27:30.710 "core_count": 1 01:27:30.710 } 01:27:30.710 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:27:30.710 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:27:30.710 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:27:30.710 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:27:30.710 | .driver_specific 01:27:30.710 | .nvme_error 01:27:30.710 | .status_code 01:27:30.710 | .command_transient_transport_error' 01:27:30.969 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 526 > 0 )) 01:27:30.969 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80244 01:27:30.969 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80244 ']' 01:27:30.969 05:22:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80244 01:27:30.969 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:27:30.969 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:30.969 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80244 01:27:30.969 killing process with pid 80244 01:27:30.969 Received shutdown signal, test time was about 2.000000 seconds 01:27:30.969 01:27:30.969 Latency(us) 01:27:30.969 [2024-12-09T05:22:13.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:30.969 [2024-12-09T05:22:13.425Z] =================================================================================================================== 01:27:30.969 [2024-12-09T05:22:13.425Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:27:30.969 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:27:30.970 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:27:30.970 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80244' 01:27:30.970 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80244 01:27:30.970 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80244 01:27:31.229 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 01:27:31.229 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:27:31.229 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 01:27:31.229 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 01:27:31.229 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 01:27:31.229 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80299 01:27:31.229 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 01:27:31.229 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80299 /var/tmp/bperf.sock 01:27:31.229 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80299 ']' 01:27:31.229 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:27:31.229 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:31.229 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:27:31.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
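For reference, the pass/fail check logged above comes from the digest.sh helper get_transient_errcount, which reads the per-controller error counters that bdev_nvme keeps when --nvme-error-stat is enabled. A minimal sketch of the same query, reusing only the rpc.py path, socket, bdev name, and jq filter visible in this trace:

    # Sketch only: read the transient transport error counter the test asserts on.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    # digest.sh only requires a non-zero count; this randread pass reported 526.
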
01:27:31.229 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:31.229 05:22:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:31.229 [2024-12-09 05:22:13.602859] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:27:31.229 [2024-12-09 05:22:13.603040] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80299 ] 01:27:31.488 [2024-12-09 05:22:13.754560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:31.488 [2024-12-09 05:22:13.809114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:27:31.488 [2024-12-09 05:22:13.850390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:27:32.057 05:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:32.057 05:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:27:32.057 05:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:27:32.057 05:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:27:32.316 05:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:27:32.316 05:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:32.316 05:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:32.316 05:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:32.316 05:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:27:32.316 05:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:27:32.582 nvme0n1 01:27:32.582 05:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 01:27:32.582 05:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:32.582 05:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:32.582 05:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:32.582 05:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:27:32.582 05:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:27:32.857 Running I/O for 2 seconds... 
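The randwrite pass (4 KiB I/O, queue depth 128) that starts here follows the same flow as the xtrace above. A condensed sketch assembled from those commands, assuming the paths, socket, and target address shown in this log (10.0.0.3:4420, nqn.2016-06.io.spdk:cnode1); corrupting every 256th crc32c result is what produces the data digest errors that follow:

    # Sketch assembled from the trace above; not a drop-in script.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf waiting for the perform_tests RPC (-z): core mask 0x2,
    # randwrite workload, 4 KiB I/O, queue depth 128, 2 second run.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r "$BPERF_SOCK" \
        -w randwrite -o 4096 -t 2 -q 128 -z &

    # Keep per-controller NVMe error stats and retry failed I/O indefinitely.
    $RPC -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Injection starts disabled, the controller is attached with data digest
    # enabled (--ddgst), then every 256th crc32c result is corrupted before the
    # run begins. The inject RPCs go through rpc_cmd (no -s flag in the trace).
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256

    # Kick off the timed run.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests
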
01:27:32.857 [2024-12-09 05:22:15.058100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efb048 01:27:32.857 [2024-12-09 05:22:15.059384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.857 [2024-12-09 05:22:15.059467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:32.857 [2024-12-09 05:22:15.071058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efb8b8 01:27:32.857 [2024-12-09 05:22:15.072208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.857 [2024-12-09 05:22:15.072239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:32.857 [2024-12-09 05:22:15.083625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efc128 01:27:32.857 [2024-12-09 05:22:15.084781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.857 [2024-12-09 05:22:15.084809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.095903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efc998 01:27:32.858 [2024-12-09 05:22:15.096982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.097010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.108201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efd208 01:27:32.858 [2024-12-09 05:22:15.109286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.109314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.120621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efda78 01:27:32.858 [2024-12-09 05:22:15.121674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.121702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.132923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efe2e8 01:27:32.858 [2024-12-09 05:22:15.134032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.134060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 
p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.145524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efeb58 01:27:32.858 [2024-12-09 05:22:15.146553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.146579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.162888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efef90 01:27:32.858 [2024-12-09 05:22:15.164927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.164952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.175109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efeb58 01:27:32.858 [2024-12-09 05:22:15.177098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.177162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.187491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efe2e8 01:27:32.858 [2024-12-09 05:22:15.189411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.189436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.199569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efda78 01:27:32.858 [2024-12-09 05:22:15.201480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.201505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.211675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efd208 01:27:32.858 [2024-12-09 05:22:15.213572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.213599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.223916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efc998 01:27:32.858 [2024-12-09 05:22:15.225803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.225828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.236958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efc128 01:27:32.858 [2024-12-09 05:22:15.238968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.238992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.250314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efb8b8 01:27:32.858 [2024-12-09 05:22:15.252496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.252531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.264561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efb048 01:27:32.858 [2024-12-09 05:22:15.266745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.266777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.278673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efa7d8 01:27:32.858 [2024-12-09 05:22:15.280929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.281010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.292176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef9f68 01:27:32.858 [2024-12-09 05:22:15.294173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.294204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:27:32.858 [2024-12-09 05:22:15.305251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef96f8 01:27:32.858 [2024-12-09 05:22:15.307247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:32.858 [2024-12-09 05:22:15.307275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:27:33.121 [2024-12-09 05:22:15.318273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef8e88 01:27:33.121 [2024-12-09 05:22:15.320414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.121 [2024-12-09 05:22:15.320456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:27:33.121 [2024-12-09 05:22:15.332194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef8618 01:27:33.121 [2024-12-09 05:22:15.334073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.121 [2024-12-09 05:22:15.334102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:27:33.121 [2024-12-09 05:22:15.345210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef7da8 01:27:33.121 [2024-12-09 05:22:15.347094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.121 [2024-12-09 05:22:15.347118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:27:33.121 [2024-12-09 05:22:15.359184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef7538 01:27:33.121 [2024-12-09 05:22:15.361213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.121 [2024-12-09 05:22:15.361290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:27:33.121 [2024-12-09 05:22:15.372500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef6cc8 01:27:33.121 [2024-12-09 05:22:15.374309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.121 [2024-12-09 05:22:15.374353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:27:33.121 [2024-12-09 05:22:15.386309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef6458 01:27:33.121 [2024-12-09 05:22:15.388208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.121 [2024-12-09 05:22:15.388290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:27:33.121 [2024-12-09 05:22:15.399287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef5be8 01:27:33.121 [2024-12-09 05:22:15.401110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.121 [2024-12-09 05:22:15.401143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:27:33.121 [2024-12-09 05:22:15.412273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef5378 01:27:33.121 [2024-12-09 05:22:15.414063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.121 [2024-12-09 05:22:15.414096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:27:33.121 [2024-12-09 05:22:15.425873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef4b08 01:27:33.121 [2024-12-09 05:22:15.427822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.121 [2024-12-09 05:22:15.427856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:27:33.121 [2024-12-09 05:22:15.439008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef4298 01:27:33.121 [2024-12-09 05:22:15.440954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.121 [2024-12-09 05:22:15.441025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:27:33.121 [2024-12-09 05:22:15.452584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef3a28 01:27:33.121 [2024-12-09 05:22:15.454302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.121 [2024-12-09 05:22:15.454350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:27:33.121 [2024-12-09 05:22:15.466061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef31b8 01:27:33.121 [2024-12-09 05:22:15.467934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.121 [2024-12-09 05:22:15.467965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:27:33.121 [2024-12-09 05:22:15.479034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef2948 01:27:33.121 [2024-12-09 05:22:15.480722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.122 [2024-12-09 05:22:15.480753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:27:33.122 [2024-12-09 05:22:15.492591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef20d8 01:27:33.122 [2024-12-09 05:22:15.494418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.122 [2024-12-09 05:22:15.494470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:27:33.122 [2024-12-09 05:22:15.505853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef1868 01:27:33.122 [2024-12-09 05:22:15.507535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.122 [2024-12-09 05:22:15.507639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:27:33.122 [2024-12-09 05:22:15.519278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef0ff8 01:27:33.122 [2024-12-09 05:22:15.520921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.122 [2024-12-09 05:22:15.520953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:27:33.122 [2024-12-09 05:22:15.532767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef0788 01:27:33.122 [2024-12-09 05:22:15.534450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.122 [2024-12-09 05:22:15.534499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:27:33.122 [2024-12-09 05:22:15.545616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eeff18 01:27:33.122 [2024-12-09 05:22:15.547322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.122 [2024-12-09 05:22:15.547363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:27:33.122 [2024-12-09 05:22:15.558304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eef6a8 01:27:33.122 [2024-12-09 05:22:15.559976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.122 [2024-12-09 05:22:15.560010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:27:33.122 [2024-12-09 05:22:15.571951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eeee38 01:27:33.122 [2024-12-09 05:22:15.573624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.122 [2024-12-09 05:22:15.573658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:27:33.381 [2024-12-09 05:22:15.585038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eee5c8 01:27:33.381 [2024-12-09 05:22:15.586588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.381 [2024-12-09 05:22:15.586619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:27:33.381 [2024-12-09 05:22:15.597960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eedd58 01:27:33.381 [2024-12-09 05:22:15.599503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.381 [2024-12-09 05:22:15.599535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:27:33.381 [2024-12-09 05:22:15.611312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eed4e8 01:27:33.381 [2024-12-09 05:22:15.612933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.381 [2024-12-09 05:22:15.612968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:27:33.381 [2024-12-09 05:22:15.624384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eecc78 01:27:33.381 [2024-12-09 05:22:15.625907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.381 [2024-12-09 05:22:15.625939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:27:33.381 [2024-12-09 05:22:15.636783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eec408 01:27:33.382 [2024-12-09 05:22:15.638410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.382 [2024-12-09 05:22:15.638444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:27:33.382 [2024-12-09 05:22:15.650093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eebb98 01:27:33.382 [2024-12-09 05:22:15.651658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.382 [2024-12-09 05:22:15.651737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:27:33.382 [2024-12-09 05:22:15.663177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eeb328 01:27:33.382 [2024-12-09 05:22:15.664766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.382 [2024-12-09 05:22:15.664800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:27:33.382 [2024-12-09 05:22:15.676158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eeaab8 01:27:33.382 [2024-12-09 05:22:15.677674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.382 [2024-12-09 05:22:15.677741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:27:33.382 [2024-12-09 05:22:15.689809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eea248 01:27:33.382 [2024-12-09 05:22:15.691257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.382 [2024-12-09 
05:22:15.691380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:27:33.382 [2024-12-09 05:22:15.702953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee99d8 01:27:33.382 [2024-12-09 05:22:15.704434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.382 [2024-12-09 05:22:15.704538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:27:33.382 [2024-12-09 05:22:15.716127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee9168 01:27:33.382 [2024-12-09 05:22:15.717976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.382 [2024-12-09 05:22:15.718081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:27:33.382 [2024-12-09 05:22:15.729486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee88f8 01:27:33.382 [2024-12-09 05:22:15.730982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.382 [2024-12-09 05:22:15.731067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:27:33.382 [2024-12-09 05:22:15.742374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee8088 01:27:33.382 [2024-12-09 05:22:15.743959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.382 [2024-12-09 05:22:15.744057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:27:33.382 [2024-12-09 05:22:15.756193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee7818 01:27:33.382 [2024-12-09 05:22:15.757733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.382 [2024-12-09 05:22:15.757812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:27:33.382 [2024-12-09 05:22:15.769416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee6fa8 01:27:33.382 [2024-12-09 05:22:15.770802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.382 [2024-12-09 05:22:15.770879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:27:33.382 [2024-12-09 05:22:15.781827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee6738 01:27:33.382 [2024-12-09 05:22:15.783142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:27:33.382 [2024-12-09 05:22:15.783227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:27:33.382 [2024-12-09 05:22:15.795541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee5ec8 01:27:33.382 [2024-12-09 05:22:15.796997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.382 [2024-12-09 05:22:15.797076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:27:33.382 [2024-12-09 05:22:15.808667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee5658 01:27:33.382 [2024-12-09 05:22:15.810077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.382 [2024-12-09 05:22:15.810167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:27:33.382 [2024-12-09 05:22:15.821249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee4de8 01:27:33.382 [2024-12-09 05:22:15.822516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.382 [2024-12-09 05:22:15.822596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:27:33.382 [2024-12-09 05:22:15.834690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee4578 01:27:33.642 [2024-12-09 05:22:15.836040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.642 [2024-12-09 05:22:15.836130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:27:33.642 [2024-12-09 05:22:15.847497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee3d08 01:27:33.642 [2024-12-09 05:22:15.848800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.642 [2024-12-09 05:22:15.848890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:27:33.642 [2024-12-09 05:22:15.860557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee3498 01:27:33.642 [2024-12-09 05:22:15.861945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.642 [2024-12-09 05:22:15.862034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:27:33.642 [2024-12-09 05:22:15.874127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee2c28 01:27:33.642 [2024-12-09 05:22:15.875374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13286 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.642 [2024-12-09 05:22:15.875464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:27:33.642 [2024-12-09 05:22:15.887058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee23b8 01:27:33.642 [2024-12-09 05:22:15.888516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.642 [2024-12-09 05:22:15.888617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:27:33.642 [2024-12-09 05:22:15.900185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee1b48 01:27:33.642 [2024-12-09 05:22:15.901373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.642 [2024-12-09 05:22:15.901452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:27:33.642 [2024-12-09 05:22:15.913437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee12d8 01:27:33.642 [2024-12-09 05:22:15.914725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.642 [2024-12-09 05:22:15.914814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:27:33.642 [2024-12-09 05:22:15.927001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee0a68 01:27:33.642 [2024-12-09 05:22:15.928389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.642 [2024-12-09 05:22:15.928488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:27:33.642 [2024-12-09 05:22:15.940797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee01f8 01:27:33.642 [2024-12-09 05:22:15.941999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.642 [2024-12-09 05:22:15.942086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:27:33.642 [2024-12-09 05:22:15.954767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016edf988 01:27:33.642 [2024-12-09 05:22:15.956090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.642 [2024-12-09 05:22:15.956198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:27:33.642 [2024-12-09 05:22:15.968242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016edf118 01:27:33.642 [2024-12-09 05:22:15.969416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:7677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.643 [2024-12-09 05:22:15.969507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:27:33.643 [2024-12-09 05:22:15.981367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ede8a8 01:27:33.643 [2024-12-09 05:22:15.982544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.643 [2024-12-09 05:22:15.982642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:27:33.643 [2024-12-09 05:22:15.995790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ede038 01:27:33.643 [2024-12-09 05:22:15.996926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.643 [2024-12-09 05:22:15.997017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:27:33.643 [2024-12-09 05:22:16.014227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ede038 01:27:33.643 [2024-12-09 05:22:16.016762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.643 [2024-12-09 05:22:16.016852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:27:33.643 [2024-12-09 05:22:16.028113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ede8a8 01:27:33.643 [2024-12-09 05:22:16.030240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.643 [2024-12-09 05:22:16.030272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:27:33.643 [2024-12-09 05:22:16.042784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016edf118 01:27:33.643 [2024-12-09 05:22:16.045736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.643 [2024-12-09 05:22:16.045772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:27:33.643 19104.00 IOPS, 74.62 MiB/s [2024-12-09T05:22:16.099Z] [2024-12-09 05:22:16.057826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016edf988 01:27:33.643 [2024-12-09 05:22:16.060096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.643 [2024-12-09 05:22:16.060138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:27:33.643 [2024-12-09 05:22:16.071605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee01f8 01:27:33.643 [2024-12-09 05:22:16.073675] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.643 [2024-12-09 05:22:16.073782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:27:33.643 [2024-12-09 05:22:16.085618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee0a68 01:27:33.643 [2024-12-09 05:22:16.087691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.643 [2024-12-09 05:22:16.087731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.099051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee12d8 01:27:33.903 [2024-12-09 05:22:16.101044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.101124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.112374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee1b48 01:27:33.903 [2024-12-09 05:22:16.114425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.114455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.126048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee23b8 01:27:33.903 [2024-12-09 05:22:16.128041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.128072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.139213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee2c28 01:27:33.903 [2024-12-09 05:22:16.141307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.141346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.153075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee3498 01:27:33.903 [2024-12-09 05:22:16.155317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.155360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.166851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee3d08 01:27:33.903 
[2024-12-09 05:22:16.168888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.168922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.180600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee4578 01:27:33.903 [2024-12-09 05:22:16.182944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.183028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.194998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee4de8 01:27:33.903 [2024-12-09 05:22:16.197057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.197089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.208625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee5658 01:27:33.903 [2024-12-09 05:22:16.210651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.210684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.222427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee5ec8 01:27:33.903 [2024-12-09 05:22:16.224718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.224814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.236245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee6738 01:27:33.903 [2024-12-09 05:22:16.238132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.238166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.249602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee6fa8 01:27:33.903 [2024-12-09 05:22:16.251386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.251421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.263445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with 
pdu=0x200016ee7818 01:27:33.903 [2024-12-09 05:22:16.265440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.265531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.277529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee8088 01:27:33.903 [2024-12-09 05:22:16.279438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.279477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.291924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee88f8 01:27:33.903 [2024-12-09 05:22:16.294205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.294245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.306792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee9168 01:27:33.903 [2024-12-09 05:22:16.308877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.308964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.320790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ee99d8 01:27:33.903 [2024-12-09 05:22:16.322642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.322676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.334453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eea248 01:27:33.903 [2024-12-09 05:22:16.336232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.336356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:33.903 [2024-12-09 05:22:16.347781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eeaab8 01:27:33.903 [2024-12-09 05:22:16.349430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:33.903 [2024-12-09 05:22:16.349462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:27:34.163 [2024-12-09 05:22:16.360710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cffae0) with pdu=0x200016eeb328 01:27:34.163 [2024-12-09 05:22:16.362346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.163 [2024-12-09 05:22:16.362426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:27:34.163 [2024-12-09 05:22:16.374056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eebb98 01:27:34.163 [2024-12-09 05:22:16.375816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.163 [2024-12-09 05:22:16.375846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:27:34.163 [2024-12-09 05:22:16.387199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eec408 01:27:34.163 [2024-12-09 05:22:16.388853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.163 [2024-12-09 05:22:16.388883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:27:34.163 [2024-12-09 05:22:16.399896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eecc78 01:27:34.163 [2024-12-09 05:22:16.401712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.163 [2024-12-09 05:22:16.401744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:27:34.163 [2024-12-09 05:22:16.413270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eed4e8 01:27:34.163 [2024-12-09 05:22:16.415010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.164 [2024-12-09 05:22:16.415039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:27:34.164 [2024-12-09 05:22:16.426273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eedd58 01:27:34.164 [2024-12-09 05:22:16.427865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.164 [2024-12-09 05:22:16.427896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:27:34.164 [2024-12-09 05:22:16.438932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eee5c8 01:27:34.164 [2024-12-09 05:22:16.440873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.164 [2024-12-09 05:22:16.440913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:27:34.164 [2024-12-09 05:22:16.452482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1cffae0) with pdu=0x200016eeee38 01:27:34.164 [2024-12-09 05:22:16.454065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.164 [2024-12-09 05:22:16.454096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:27:34.164 [2024-12-09 05:22:16.465332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eef6a8 01:27:34.164 [2024-12-09 05:22:16.466901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.164 [2024-12-09 05:22:16.466929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:27:34.164 [2024-12-09 05:22:16.478377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016eeff18 01:27:34.164 [2024-12-09 05:22:16.480269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.164 [2024-12-09 05:22:16.480305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:27:34.164 [2024-12-09 05:22:16.491575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef0788 01:27:34.164 [2024-12-09 05:22:16.493226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.164 [2024-12-09 05:22:16.493261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:27:34.164 [2024-12-09 05:22:16.504554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef0ff8 01:27:34.164 [2024-12-09 05:22:16.506005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.164 [2024-12-09 05:22:16.506034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:27:34.164 [2024-12-09 05:22:16.517481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef1868 01:27:34.164 [2024-12-09 05:22:16.519283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.164 [2024-12-09 05:22:16.519313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:27:34.164 [2024-12-09 05:22:16.530588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef20d8 01:27:34.164 [2024-12-09 05:22:16.532088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.164 [2024-12-09 05:22:16.532117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:27:34.164 [2024-12-09 05:22:16.543247] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef2948 01:27:34.164 [2024-12-09 05:22:16.544817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.164 [2024-12-09 05:22:16.544851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:34.164 [2024-12-09 05:22:16.556802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef31b8 01:27:34.164 [2024-12-09 05:22:16.558200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.164 [2024-12-09 05:22:16.558279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:27:34.164 [2024-12-09 05:22:16.569513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef3a28 01:27:34.164 [2024-12-09 05:22:16.570956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.164 [2024-12-09 05:22:16.570986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:27:34.164 [2024-12-09 05:22:16.582226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef4298 01:27:34.164 [2024-12-09 05:22:16.583851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.164 [2024-12-09 05:22:16.583878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:27:34.164 [2024-12-09 05:22:16.596016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef4b08 01:27:34.164 [2024-12-09 05:22:16.597406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.164 [2024-12-09 05:22:16.597436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:27:34.164 [2024-12-09 05:22:16.608737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef5378 01:27:34.164 [2024-12-09 05:22:16.610126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.164 [2024-12-09 05:22:16.610155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:27:34.424 [2024-12-09 05:22:16.622275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef5be8 01:27:34.424 [2024-12-09 05:22:16.623829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.424 [2024-12-09 05:22:16.623860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:27:34.424 
[2024-12-09 05:22:16.635366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef6458 01:27:34.424 [2024-12-09 05:22:16.636739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.424 [2024-12-09 05:22:16.636771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:27:34.424 [2024-12-09 05:22:16.648038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef6cc8 01:27:34.424 [2024-12-09 05:22:16.649540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.424 [2024-12-09 05:22:16.649573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:27:34.424 [2024-12-09 05:22:16.661626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef7538 01:27:34.425 [2024-12-09 05:22:16.662822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.425 [2024-12-09 05:22:16.662850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:27:34.425 [2024-12-09 05:22:16.674170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef7da8 01:27:34.425 [2024-12-09 05:22:16.675440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.425 [2024-12-09 05:22:16.675540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:27:34.425 [2024-12-09 05:22:16.687663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef8618 01:27:34.425 [2024-12-09 05:22:16.689112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.425 [2024-12-09 05:22:16.689143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:27:34.425 [2024-12-09 05:22:16.700738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef8e88 01:27:34.425 [2024-12-09 05:22:16.701999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.425 [2024-12-09 05:22:16.702039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:27:34.425 [2024-12-09 05:22:16.713720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef96f8 01:27:34.425 [2024-12-09 05:22:16.714971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.425 [2024-12-09 05:22:16.715003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 
m:0 dnr:0 01:27:34.425 [2024-12-09 05:22:16.727038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef9f68 01:27:34.425 [2024-12-09 05:22:16.728250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.425 [2024-12-09 05:22:16.728280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:27:34.425 [2024-12-09 05:22:16.739738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efa7d8 01:27:34.425 [2024-12-09 05:22:16.740974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.425 [2024-12-09 05:22:16.741002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:27:34.425 [2024-12-09 05:22:16.752512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efb048 01:27:34.425 [2024-12-09 05:22:16.753671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.425 [2024-12-09 05:22:16.753700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:34.425 [2024-12-09 05:22:16.765646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efb8b8 01:27:34.425 [2024-12-09 05:22:16.766929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.425 [2024-12-09 05:22:16.767003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:27:34.425 [2024-12-09 05:22:16.778799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efc128 01:27:34.425 [2024-12-09 05:22:16.780068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.425 [2024-12-09 05:22:16.780203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:27:34.425 [2024-12-09 05:22:16.792285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efc998 01:27:34.425 [2024-12-09 05:22:16.793574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.425 [2024-12-09 05:22:16.793661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:27:34.425 [2024-12-09 05:22:16.805561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efd208 01:27:34.425 [2024-12-09 05:22:16.806745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.425 [2024-12-09 05:22:16.806828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:27:34.425 [2024-12-09 05:22:16.818408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efda78 01:27:34.425 [2024-12-09 05:22:16.819790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.425 [2024-12-09 05:22:16.819889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:27:34.425 [2024-12-09 05:22:16.832313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efe2e8 01:27:34.425 [2024-12-09 05:22:16.833398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.425 [2024-12-09 05:22:16.833479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:27:34.425 [2024-12-09 05:22:16.845289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efeb58 01:27:34.425 [2024-12-09 05:22:16.846377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.425 [2024-12-09 05:22:16.846460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:27:34.425 [2024-12-09 05:22:16.863918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efef90 01:27:34.425 [2024-12-09 05:22:16.866022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.425 [2024-12-09 05:22:16.866095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:27:34.425 [2024-12-09 05:22:16.876837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efeb58 01:27:34.684 [2024-12-09 05:22:16.878911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.684 [2024-12-09 05:22:16.878981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:27:34.684 [2024-12-09 05:22:16.889426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efe2e8 01:27:34.684 [2024-12-09 05:22:16.891399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.684 [2024-12-09 05:22:16.891470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:27:34.684 [2024-12-09 05:22:16.902525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efda78 01:27:34.684 [2024-12-09 05:22:16.904668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.684 [2024-12-09 05:22:16.904755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:27:34.684 [2024-12-09 05:22:16.915625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efd208 01:27:34.684 [2024-12-09 05:22:16.917818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.684 [2024-12-09 05:22:16.917888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:27:34.684 [2024-12-09 05:22:16.928674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efc998 01:27:34.685 [2024-12-09 05:22:16.930948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.685 [2024-12-09 05:22:16.931015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:27:34.685 [2024-12-09 05:22:16.942070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efc128 01:27:34.685 [2024-12-09 05:22:16.944198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.685 [2024-12-09 05:22:16.944293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:27:34.685 [2024-12-09 05:22:16.955415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efb8b8 01:27:34.685 [2024-12-09 05:22:16.957515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.685 [2024-12-09 05:22:16.957583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:27:34.685 [2024-12-09 05:22:16.968495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efb048 01:27:34.685 [2024-12-09 05:22:16.970518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.685 [2024-12-09 05:22:16.970589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:27:34.685 [2024-12-09 05:22:16.981667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016efa7d8 01:27:34.685 [2024-12-09 05:22:16.983789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.685 [2024-12-09 05:22:16.983879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:27:34.685 [2024-12-09 05:22:16.994646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef9f68 01:27:34.685 [2024-12-09 05:22:16.996686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.685 [2024-12-09 05:22:16.996761] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:27:34.685 [2024-12-09 05:22:17.008048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef96f8 01:27:34.685 [2024-12-09 05:22:17.009980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.685 [2024-12-09 05:22:17.010052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:27:34.685 [2024-12-09 05:22:17.021367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef8e88 01:27:34.685 [2024-12-09 05:22:17.023281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.685 [2024-12-09 05:22:17.023365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:27:34.685 [2024-12-09 05:22:17.034056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef8618 01:27:34.685 [2024-12-09 05:22:17.035998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.685 [2024-12-09 05:22:17.036071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:27:34.685 19039.50 IOPS, 74.37 MiB/s [2024-12-09T05:22:17.141Z] [2024-12-09 05:22:17.048692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cffae0) with pdu=0x200016ef7da8 01:27:34.685 [2024-12-09 05:22:17.050765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:27:34.685 [2024-12-09 05:22:17.050853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:27:34.685 01:27:34.685 Latency(us) 01:27:34.685 [2024-12-09T05:22:17.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:34.685 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:27:34.685 nvme0n1 : 2.01 19073.74 74.51 0.00 0.00 6705.07 2504.10 25298.61 01:27:34.685 [2024-12-09T05:22:17.141Z] =================================================================================================================== 01:27:34.685 [2024-12-09T05:22:17.141Z] Total : 19073.74 74.51 0.00 0.00 6705.07 2504.10 25298.61 01:27:34.685 { 01:27:34.685 "results": [ 01:27:34.685 { 01:27:34.685 "job": "nvme0n1", 01:27:34.685 "core_mask": "0x2", 01:27:34.685 "workload": "randwrite", 01:27:34.685 "status": "finished", 01:27:34.685 "queue_depth": 128, 01:27:34.685 "io_size": 4096, 01:27:34.685 "runtime": 2.009779, 01:27:34.685 "iops": 19073.738953387412, 01:27:34.685 "mibps": 74.50679278666958, 01:27:34.685 "io_failed": 0, 01:27:34.685 "io_timeout": 0, 01:27:34.685 "avg_latency_us": 6705.068207889151, 01:27:34.685 "min_latency_us": 2504.1048034934497, 01:27:34.685 "max_latency_us": 25298.61310043668 01:27:34.685 } 01:27:34.685 ], 01:27:34.685 "core_count": 1 01:27:34.685 } 01:27:34.685 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 
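The trace that follows shows host/digest.sh reading back the transient-transport-error count for nvme0n1 over the bperf RPC socket. A minimal standalone sketch of that query, assuming bdevperf is still listening on /var/tmp/bperf.sock as in this run (the rpc.py path and the jq filter are taken from the trace itself):

# Query bdevperf's iostat for nvme0n1 over its RPC socket and extract the
# count of COMMAND TRANSIENT TRANSPORT ERROR completions recorded by the driver.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The script then asserts that this count is greater than zero, confirming that the injected checksum corruption (the accel_error_inject_error calls visible in the trace) surfaced as data-digest failures on the TCP transport rather than being silently absorbed.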
01:27:34.685 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:27:34.685 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:27:34.685 | .driver_specific 01:27:34.685 | .nvme_error 01:27:34.685 | .status_code 01:27:34.685 | .command_transient_transport_error' 01:27:34.685 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:27:34.944 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 150 > 0 )) 01:27:34.944 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80299 01:27:34.944 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80299 ']' 01:27:34.944 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80299 01:27:34.944 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:27:34.944 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:34.944 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80299 01:27:34.944 killing process with pid 80299 01:27:34.944 Received shutdown signal, test time was about 2.000000 seconds 01:27:34.944 01:27:34.944 Latency(us) 01:27:34.944 [2024-12-09T05:22:17.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:34.944 [2024-12-09T05:22:17.400Z] =================================================================================================================== 01:27:34.944 [2024-12-09T05:22:17.400Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:27:34.944 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:27:34.944 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:27:34.944 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80299' 01:27:34.944 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80299 01:27:34.944 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80299 01:27:35.202 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 01:27:35.202 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:27:35.202 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 01:27:35.202 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 01:27:35.202 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 01:27:35.202 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80359 01:27:35.202 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 01:27:35.202 05:22:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80359 /var/tmp/bperf.sock 01:27:35.202 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80359 ']' 01:27:35.202 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:27:35.202 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:35.202 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:27:35.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:27:35.203 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:35.203 05:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:35.203 [2024-12-09 05:22:17.607612] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:27:35.203 [2024-12-09 05:22:17.607772] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 01:27:35.203 Zero copy mechanism will not be used. 01:27:35.203 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80359 ] 01:27:35.461 [2024-12-09 05:22:17.758772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:27:35.461 [2024-12-09 05:22:17.811911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:27:35.461 [2024-12-09 05:22:17.853119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:27:36.028 05:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:36.028 05:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:27:36.028 05:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:27:36.028 05:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:27:36.287 05:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:27:36.287 05:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:36.287 05:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:36.287 05:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:36.287 05:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:27:36.287 05:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 
-n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:27:36.559 nvme0n1 01:27:36.559 05:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 01:27:36.559 05:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:27:36.559 05:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:36.559 05:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:27:36.559 05:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:27:36.559 05:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:27:36.820 I/O size of 131072 is greater than zero copy threshold (65536). 01:27:36.820 Zero copy mechanism will not be used. 01:27:36.820 Running I/O for 2 seconds... 01:27:36.820 [2024-12-09 05:22:19.106184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.820 [2024-12-09 05:22:19.106256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.820 [2024-12-09 05:22:19.106282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:36.820 [2024-12-09 05:22:19.110655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.820 [2024-12-09 05:22:19.110796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.820 [2024-12-09 05:22:19.110909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:36.820 [2024-12-09 05:22:19.114179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.820 [2024-12-09 05:22:19.114549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.820 [2024-12-09 05:22:19.114632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:36.820 [2024-12-09 05:22:19.118114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.820 [2024-12-09 05:22:19.118218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.820 [2024-12-09 05:22:19.118307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:36.820 [2024-12-09 05:22:19.121952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.820 [2024-12-09 05:22:19.122064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.820 [2024-12-09 05:22:19.122146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:36.820 [2024-12-09 05:22:19.125783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.820 [2024-12-09 05:22:19.125898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.820 [2024-12-09 05:22:19.126003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:36.820 [2024-12-09 05:22:19.129852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.820 [2024-12-09 05:22:19.129953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.820 [2024-12-09 05:22:19.130030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:36.820 [2024-12-09 05:22:19.133763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.820 [2024-12-09 05:22:19.133878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.820 [2024-12-09 05:22:19.133963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:36.820 [2024-12-09 05:22:19.137583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.137689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 05:22:19.137764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.141540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.141620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 05:22:19.141637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.145232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.145417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 05:22:19.145435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.149506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.149647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 05:22:19.149664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.153423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.153569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 05:22:19.153585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.157330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.157506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 05:22:19.157524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.161454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.161603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 05:22:19.161619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.165440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.165584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 05:22:19.165601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.169460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.169606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 05:22:19.169622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.172799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.173126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 05:22:19.173143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.176289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.176396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 
05:22:19.176411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.180079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.180171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 05:22:19.180188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.183837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.183927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 05:22:19.183944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.187806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.187881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 05:22:19.187900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.192042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.192111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 05:22:19.192129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.196216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.196305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 05:22:19.196322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.200201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.200298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.821 [2024-12-09 05:22:19.200316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:36.821 [2024-12-09 05:22:19.204105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.821 [2024-12-09 05:22:19.204192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:27:36.821 [2024-12-09 05:22:19.204210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.208039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.208136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.208155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.212052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.212106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.212123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.215407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.215784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.215805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.219054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.219144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.219161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.223090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.223178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.223194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.227011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.227096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.227112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.231077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.231127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.231143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.235164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.235222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.235238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.239100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.239170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.239185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.242796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.242862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.242879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.246762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.246821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.246838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.250724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.250793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.250810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.254636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.254686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.254701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.258400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.258508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.258524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.262112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.262291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.262309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.266325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.266408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.266426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:36.822 [2024-12-09 05:22:19.269829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:36.822 [2024-12-09 05:22:19.270289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:36.822 [2024-12-09 05:22:19.270317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.083 [2024-12-09 05:22:19.273726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.083 [2024-12-09 05:22:19.273821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.083 [2024-12-09 05:22:19.273839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.083 [2024-12-09 05:22:19.277764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.083 [2024-12-09 05:22:19.277819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.083 [2024-12-09 05:22:19.277835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.083 [2024-12-09 05:22:19.281654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.083 [2024-12-09 05:22:19.281726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.083 [2024-12-09 05:22:19.281744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.285652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.285731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.285749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.289661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.289730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.289749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.293735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.293840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.293857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.297612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.297738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.297755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.301064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.301396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.301414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.305075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.305559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.305584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.309270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.309784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.309811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.313548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.313945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.313967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.317427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.317522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.317540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.321172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.321265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.321282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.324991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.325099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.325115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.328778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.328843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.328859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.332705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.332780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.332797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.336970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.337064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.337082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.341225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 
05:22:19.341279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.341296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.345235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.345294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.345311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.349145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.349202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.349218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.353236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.353295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.353312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.357186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.357238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.357255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.361092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.361154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.361170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.365013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.365073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.365088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.368852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with 
pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.368965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.368981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.372545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.372612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.372628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.376516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.376579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.376595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.380499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.380624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.380642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.384514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.384593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.384610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.084 [2024-12-09 05:22:19.388543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.084 [2024-12-09 05:22:19.388674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.084 [2024-12-09 05:22:19.388691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.392407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.392504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.392520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.396282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.396437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.396454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.400254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.400455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.400474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.404269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.404423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.404439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.408218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.408373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.408389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.412279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.412458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.412475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.416443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.416615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.416631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.419825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.420148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.420166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.423690] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.424126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.424148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.427417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.427476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.427493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.431218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.431308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.431335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.434993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.435099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.435117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.438893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.438957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.438974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.442713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.442789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.442807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.446833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.446890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.446907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.450867] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.450939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.450957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.455106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.455160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.455178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.459141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.459195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.459221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.463121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.463169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.463185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.467014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.467067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.467083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.470914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.470987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.471004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.474921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.474993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.475010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.085 
[2024-12-09 05:22:19.478687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.478778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.478794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.482556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.482650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.482666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.486452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.486512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.486528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.490196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.490287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.490304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.493888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.085 [2024-12-09 05:22:19.493975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.085 [2024-12-09 05:22:19.493990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.085 [2024-12-09 05:22:19.497683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.086 [2024-12-09 05:22:19.497874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.086 [2024-12-09 05:22:19.497890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.086 [2024-12-09 05:22:19.501522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.086 [2024-12-09 05:22:19.501651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.086 [2024-12-09 05:22:19.501667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 01:27:37.086 [2024-12-09 05:22:19.505327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.086 [2024-12-09 05:22:19.505476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.086 [2024-12-09 05:22:19.505491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.086 [2024-12-09 05:22:19.509325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.086 [2024-12-09 05:22:19.509469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.086 [2024-12-09 05:22:19.509488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.086 [2024-12-09 05:22:19.513243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.086 [2024-12-09 05:22:19.513405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.086 [2024-12-09 05:22:19.513425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.086 [2024-12-09 05:22:19.517228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.086 [2024-12-09 05:22:19.517390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.086 [2024-12-09 05:22:19.517497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.086 [2024-12-09 05:22:19.521090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.086 [2024-12-09 05:22:19.521260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.086 [2024-12-09 05:22:19.521355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.086 [2024-12-09 05:22:19.525039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.086 [2024-12-09 05:22:19.525192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.086 [2024-12-09 05:22:19.525258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.086 [2024-12-09 05:22:19.528390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.086 [2024-12-09 05:22:19.528739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.086 [2024-12-09 05:22:19.528816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.086 [2024-12-09 05:22:19.531984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.086 [2024-12-09 05:22:19.532098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.086 [2024-12-09 05:22:19.532171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.086 [2024-12-09 05:22:19.535680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.086 [2024-12-09 05:22:19.535791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.086 [2024-12-09 05:22:19.535864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.539395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.539493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.347 [2024-12-09 05:22:19.539568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.543095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.543152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.347 [2024-12-09 05:22:19.543169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.546729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.546812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.347 [2024-12-09 05:22:19.546828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.550475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.550556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.347 [2024-12-09 05:22:19.550572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.554079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.554164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.347 [2024-12-09 05:22:19.554185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.557822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.558000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.347 [2024-12-09 05:22:19.558016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.561248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.561585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.347 [2024-12-09 05:22:19.561601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.564754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.564843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.347 [2024-12-09 05:22:19.564858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.568434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.568485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.347 [2024-12-09 05:22:19.568501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.571936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.572049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.347 [2024-12-09 05:22:19.572066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.575715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.575793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.347 [2024-12-09 05:22:19.575809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.579441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.579518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.347 [2024-12-09 05:22:19.579534] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.583152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.583286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.347 [2024-12-09 05:22:19.583303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.586987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.587092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.347 [2024-12-09 05:22:19.587108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.590759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.590941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.347 [2024-12-09 05:22:19.590958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.594780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.594927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.347 [2024-12-09 05:22:19.594944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.347 [2024-12-09 05:22:19.598711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.347 [2024-12-09 05:22:19.598859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.598875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.602558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.602611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.602628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.605760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.606194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.606221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.609514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.609590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.609606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.613185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.613274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.613290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.616863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.616953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.616969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.620647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.620701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.620718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.624460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.624516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.624532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.628151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.628279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.628297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.631980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.632114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 
05:22:19.632130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.635840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.636028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.636045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.639421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.639662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.639684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.642863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.642955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.642988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.646721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.646775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.646792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.650466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.650518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.650534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.654154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.654212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.654228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.657905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.658033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:27:37.348 [2024-12-09 05:22:19.658050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.661636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.661691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.661708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.665352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.665446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.665463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.669177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.669371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.669388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.673134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.673280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.673297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.677072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.677213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.677229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.680561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.680895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.680911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.684145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.684230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.684245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.687935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.688034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.688049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.691838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.691943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.692021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.695542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.695646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.348 [2024-12-09 05:22:19.695720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.348 [2024-12-09 05:22:19.699461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.348 [2024-12-09 05:22:19.699565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.699635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.703186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.703311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.703423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.707248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.707380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.707455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.711173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.711371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.711446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.715041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.715216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.715292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.719190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.719391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.719474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.723292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.723455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.723528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.727486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.727649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.727731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.730944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.731282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.731363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.734558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.734659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.734741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.738496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.738610] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.738683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.742239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.742352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.742421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.746007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.746118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.746192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.749818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.749960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.750038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.753768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.753875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.753953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.757559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.757661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.757756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.761425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.761632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.761708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.765647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.765793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.765862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.769641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.769796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.769875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.773665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.773806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.773891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.777310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.777651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.777728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.781029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.781133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.781207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.784871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.784930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.784947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.788641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.788718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.788734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.792352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 
05:22:19.792448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.792464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.349 [2024-12-09 05:22:19.796107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.349 [2024-12-09 05:22:19.796202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.349 [2024-12-09 05:22:19.796218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.609 [2024-12-09 05:22:19.800038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.609 [2024-12-09 05:22:19.800146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.609 [2024-12-09 05:22:19.800179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.609 [2024-12-09 05:22:19.803891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.609 [2024-12-09 05:22:19.804094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.609 [2024-12-09 05:22:19.804112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.609 [2024-12-09 05:22:19.807747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.609 [2024-12-09 05:22:19.807912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.609 [2024-12-09 05:22:19.807928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.609 [2024-12-09 05:22:19.811680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.609 [2024-12-09 05:22:19.811829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.609 [2024-12-09 05:22:19.811845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.609 [2024-12-09 05:22:19.815532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.815674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.815690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.819372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with 
pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.819526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.819542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.823174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.823361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.823378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.827051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.827187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.827202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.830911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.831092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.831168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.834294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.834653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.834731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.838000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.838098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.838182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.841669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.841772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.841865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.845416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.845503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.845577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.849168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.849262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.849339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.852935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.853047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.853127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.856769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.856864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.856952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.860531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.860630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.860701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.864138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.864345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.864417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.868095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.868257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.868340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.872026] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.872180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.872252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.875416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.875745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.875821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.879001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.879091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.879161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.882707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.882801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.882890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.886617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.886731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.886816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.890507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.890598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.890671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.894185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.894278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.894370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.897888] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.898007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.898084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.901604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.901788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.901860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.905377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.905480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.905559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.908982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.909114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.909198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.610 [2024-12-09 05:22:19.912578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.610 [2024-12-09 05:22:19.912771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.610 [2024-12-09 05:22:19.912844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.916033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.916377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.916455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.919734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.919837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.919936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.611 
[2024-12-09 05:22:19.923560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.923672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.923744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.927246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.927359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.927435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.930969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.931065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.931126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.934640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.934738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.934828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.938352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.938450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.938547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.942125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.942235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.942339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.945885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.946080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.946160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.949610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.949961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.949985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.953526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.953600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.953618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.957520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.957589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.957608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.961541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.961611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.961630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.965598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.965704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.965723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.969732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.969816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.969835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.973830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.973923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.973943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.977915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.978012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.978032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.982057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.982213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.982231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.985669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.985998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.986022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.989563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.989635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.989654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.993615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.993682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.993700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:19.997668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:19.997739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:19.997757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:20.001731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:20.001798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:20.001816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:20.005778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:20.005866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:20.005885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:20.009850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:20.009985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:20.010003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:20.013902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:20.014001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:20.014019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:20.017944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.611 [2024-12-09 05:22:20.018105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.611 [2024-12-09 05:22:20.018123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.611 [2024-12-09 05:22:20.022292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.612 [2024-12-09 05:22:20.022457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.612 [2024-12-09 05:22:20.022476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.612 [2024-12-09 05:22:20.026638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.612 [2024-12-09 05:22:20.026800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.612 [2024-12-09 05:22:20.026818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.612 [2024-12-09 05:22:20.030996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.612 [2024-12-09 05:22:20.031138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.612 [2024-12-09 05:22:20.031156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.612 [2024-12-09 05:22:20.035175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.612 [2024-12-09 05:22:20.035360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.612 [2024-12-09 05:22:20.035379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.612 [2024-12-09 05:22:20.039289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.612 [2024-12-09 05:22:20.039456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.612 [2024-12-09 05:22:20.039473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.612 [2024-12-09 05:22:20.043409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.612 [2024-12-09 05:22:20.043566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.612 [2024-12-09 05:22:20.043584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.612 [2024-12-09 05:22:20.047519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.612 [2024-12-09 05:22:20.047689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.612 [2024-12-09 05:22:20.047707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.612 [2024-12-09 05:22:20.051576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.612 [2024-12-09 05:22:20.051744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.612 [2024-12-09 05:22:20.051761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.612 [2024-12-09 05:22:20.055647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.612 [2024-12-09 05:22:20.055799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.612 [2024-12-09 05:22:20.055817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.612 [2024-12-09 05:22:20.059794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.612 [2024-12-09 05:22:20.059951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.612 [2024-12-09 
05:22:20.059970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.872 [2024-12-09 05:22:20.063517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.872 [2024-12-09 05:22:20.063836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.872 [2024-12-09 05:22:20.063859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.872 [2024-12-09 05:22:20.067190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.872 [2024-12-09 05:22:20.067270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.872 [2024-12-09 05:22:20.067288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.872 [2024-12-09 05:22:20.071096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.872 [2024-12-09 05:22:20.071148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.872 [2024-12-09 05:22:20.071166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.074946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.074998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.075015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.078812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.078890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.078908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.082740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.082816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.082834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.086780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.086845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:27:37.873 [2024-12-09 05:22:20.086863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.090917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.090991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.091009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.095068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.095125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.095143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.098778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.099181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.099211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.873 7982.00 IOPS, 997.75 MiB/s [2024-12-09T05:22:20.329Z] [2024-12-09 05:22:20.104183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.104245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.104265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.108560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.108616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.108635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.112815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.112873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.112890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.116928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.116985] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.117004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.121030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.121101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.121119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.124986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.125064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.125081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.128760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.128832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.128849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.132735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.132856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.132873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.136876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.136956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.136975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.141140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.141220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.141238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.145494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.145599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.145616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.149495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.149585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.149602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.153674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.153725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.153742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.157334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.157746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.157826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.161384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.161464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.161483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.165553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.165622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.165640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.169634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 05:22:20.169688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.873 [2024-12-09 05:22:20.169706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.873 [2024-12-09 05:22:20.173763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.873 [2024-12-09 
05:22:20.173814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.173831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.178010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.178064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.178099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.182476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.182548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.182568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.186562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.186612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.186629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.190727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.190784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.190802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.194653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.194706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.194723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.198422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.198540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.198557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.202266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with 
pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.202419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.202435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.206032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.206105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.206121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.209782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.209833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.209849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.212980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.213355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.213388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.216496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.216573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.216588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.220104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.220213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.220236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.223977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.224082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.224100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.227957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.228014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.228032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.231643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.231714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.231730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.235362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.235411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.235427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.239054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.239181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.239197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.242802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.242932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.242948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.246141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.246474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.246499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.249726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.249774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.249789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.253471] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.253521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.253537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.257068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.257117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.257134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.260827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.260880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.260896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.264492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.264617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.264633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.268094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.268185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.268203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.271852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.874 [2024-12-09 05:22:20.271911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.874 [2024-12-09 05:22:20.271927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.874 [2024-12-09 05:22:20.275580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.875 [2024-12-09 05:22:20.275653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.875 [2024-12-09 05:22:20.275669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.875 [2024-12-09 05:22:20.279190] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.875 [2024-12-09 05:22:20.279385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.875 [2024-12-09 05:22:20.279417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.875 [2024-12-09 05:22:20.282573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.875 [2024-12-09 05:22:20.282890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.875 [2024-12-09 05:22:20.282910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.875 [2024-12-09 05:22:20.286118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.875 [2024-12-09 05:22:20.286223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.875 [2024-12-09 05:22:20.286240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.875 [2024-12-09 05:22:20.289933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.875 [2024-12-09 05:22:20.290036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.875 [2024-12-09 05:22:20.290052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.875 [2024-12-09 05:22:20.293687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.875 [2024-12-09 05:22:20.293795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.875 [2024-12-09 05:22:20.293872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.875 [2024-12-09 05:22:20.297432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.875 [2024-12-09 05:22:20.297534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.875 [2024-12-09 05:22:20.297604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.875 [2024-12-09 05:22:20.301079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.875 [2024-12-09 05:22:20.301188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.875 [2024-12-09 05:22:20.301260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.875 
[2024-12-09 05:22:20.304795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.875 [2024-12-09 05:22:20.304911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.875 [2024-12-09 05:22:20.304981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.875 [2024-12-09 05:22:20.308600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.875 [2024-12-09 05:22:20.308724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.875 [2024-12-09 05:22:20.308798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:37.875 [2024-12-09 05:22:20.312441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.875 [2024-12-09 05:22:20.312628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.875 [2024-12-09 05:22:20.312696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:37.875 [2024-12-09 05:22:20.316241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.875 [2024-12-09 05:22:20.316359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.875 [2024-12-09 05:22:20.316440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:37.875 [2024-12-09 05:22:20.319948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.875 [2024-12-09 05:22:20.320068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.875 [2024-12-09 05:22:20.320139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:37.875 [2024-12-09 05:22:20.323616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:37.875 [2024-12-09 05:22:20.323805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:37.875 [2024-12-09 05:22:20.323875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.135 [2024-12-09 05:22:20.327370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.135 [2024-12-09 05:22:20.327523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.135 [2024-12-09 05:22:20.327594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 01:27:38.135 [2024-12-09 05:22:20.331165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.135 [2024-12-09 05:22:20.331338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.135 [2024-12-09 05:22:20.331407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.135 [2024-12-09 05:22:20.334456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.135 [2024-12-09 05:22:20.334815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.135 [2024-12-09 05:22:20.334892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.135 [2024-12-09 05:22:20.338096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.135 [2024-12-09 05:22:20.338191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.135 [2024-12-09 05:22:20.338258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.135 [2024-12-09 05:22:20.341797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.135 [2024-12-09 05:22:20.341906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.135 [2024-12-09 05:22:20.341987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.135 [2024-12-09 05:22:20.345382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.135 [2024-12-09 05:22:20.345488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.135 [2024-12-09 05:22:20.345555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.135 [2024-12-09 05:22:20.349269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.135 [2024-12-09 05:22:20.349403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.135 [2024-12-09 05:22:20.349506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.135 [2024-12-09 05:22:20.353120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.353231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.353315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.356817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.356937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.357016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.360490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.360608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.360687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.364147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.364268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.364384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.367927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.368121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.368202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.371806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.371979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.372052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.375601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.375755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.375825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.378950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.379315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.379421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.382863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.382962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.383060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.386958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.387079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.387164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.390928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.390983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.391001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.394860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.394915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.394932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.398812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.398888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.398904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.402831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.402923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.402941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.406928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.407020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.407037] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.410859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.410914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.410931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.414315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.414778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.414803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.417939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.418057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.418073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.421660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.421713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.421729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.425388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.425439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.425455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.429059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.429167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.429183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.432892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.432979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.432995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.436615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.436694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.436710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.440206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.440335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.440352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.443894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.444011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.444027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.447102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.447403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.447506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.450660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.136 [2024-12-09 05:22:20.450750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.136 [2024-12-09 05:22:20.450846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.136 [2024-12-09 05:22:20.454400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.454466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.454482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.458119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.458208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 
05:22:20.458241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.461903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.462034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.462051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.465667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.465763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.465780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.469363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.469459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.469476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.473141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.473245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.473261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.477024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.477212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.477229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.481139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.481314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.481330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.484789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.485112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:27:38.137 [2024-12-09 05:22:20.485128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.488630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.488703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.488721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.492719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.492784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.492801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.496845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.496906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.496923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.501078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.501132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.501149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.505041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.505095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.505113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.508927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.508983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.509000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.512898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.512973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.512990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.516808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.516870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.516887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.520737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.520902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.520918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.524081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.524416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.524435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.527791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.527883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.527903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.531634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.531700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.531717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.535285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.535405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.535423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.539296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.539407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.539425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.543095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.543201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.543225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.547033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.547165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.547186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.550888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.550980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.550996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.555004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.555055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.137 [2024-12-09 05:22:20.555071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.137 [2024-12-09 05:22:20.558913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.137 [2024-12-09 05:22:20.558985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.138 [2024-12-09 05:22:20.559001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.138 [2024-12-09 05:22:20.562821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.138 [2024-12-09 05:22:20.562949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.138 [2024-12-09 05:22:20.562965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.138 [2024-12-09 05:22:20.566173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.138 [2024-12-09 05:22:20.566508] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.138 [2024-12-09 05:22:20.566525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.138 [2024-12-09 05:22:20.569763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.138 [2024-12-09 05:22:20.569852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.138 [2024-12-09 05:22:20.569885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.138 [2024-12-09 05:22:20.573653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.138 [2024-12-09 05:22:20.573708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.138 [2024-12-09 05:22:20.573724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.138 [2024-12-09 05:22:20.577440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.138 [2024-12-09 05:22:20.577507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.138 [2024-12-09 05:22:20.577524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.138 [2024-12-09 05:22:20.581311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.138 [2024-12-09 05:22:20.581432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.138 [2024-12-09 05:22:20.581448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.398 [2024-12-09 05:22:20.585172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.398 [2024-12-09 05:22:20.585251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.398 [2024-12-09 05:22:20.585284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.398 [2024-12-09 05:22:20.589182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.398 [2024-12-09 05:22:20.589226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.398 [2024-12-09 05:22:20.589242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.398 [2024-12-09 05:22:20.593041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.398 [2024-12-09 05:22:20.593137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.398 [2024-12-09 05:22:20.593241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.398 [2024-12-09 05:22:20.597065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.398 [2024-12-09 05:22:20.597234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.398 [2024-12-09 05:22:20.597305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.398 [2024-12-09 05:22:20.600938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.398 [2024-12-09 05:22:20.601096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.398 [2024-12-09 05:22:20.601185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.398 [2024-12-09 05:22:20.604811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.398 [2024-12-09 05:22:20.604938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.398 [2024-12-09 05:22:20.605023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.398 [2024-12-09 05:22:20.608678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.398 [2024-12-09 05:22:20.608789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.398 [2024-12-09 05:22:20.608875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.398 [2024-12-09 05:22:20.612599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.398 [2024-12-09 05:22:20.612693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.398 [2024-12-09 05:22:20.612794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.398 [2024-12-09 05:22:20.616527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.398 [2024-12-09 05:22:20.616677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.398 [2024-12-09 05:22:20.616766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.398 [2024-12-09 05:22:20.620309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.398 [2024-12-09 
05:22:20.620548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.620616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.624155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.624352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.624425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.628137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.628306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.628390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.632072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.632247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.632309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.636050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.636226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.636302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.639972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.640143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.640219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.643901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.644055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.644127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.647842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with 
pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.648003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.648081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.651779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.651933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.652013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.655160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.655522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.655613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.658799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.658895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.658967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.662584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.662677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.662750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.666353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.666448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.666530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.670092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.670185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.670284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.673793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.673947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.674035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.677498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.677662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.677743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.681318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.681450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.681530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.685060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.685238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.685336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.688532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.688853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.688926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.692123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.692252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.692336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.695877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.695990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.696064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.699656] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.699767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.699840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.703391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.703498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.703573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.707092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.707217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.707310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.710827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.710932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.710992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.714608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.714709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.714726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.718250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.718449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.718465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.722156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.722303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.722323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.399 
[2024-12-09 05:22:20.726023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.399 [2024-12-09 05:22:20.726194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.399 [2024-12-09 05:22:20.726211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.399 [2024-12-09 05:22:20.729925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.730073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.730092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.733297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.733659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.733745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.737041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.737135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.737238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.740796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.740917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.741000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.744562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.744655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.744724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.748255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.748415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.748515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.752114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.752254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.752337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.755951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.756057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.756129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.759806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.759935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.760009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.763676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.763792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.763885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.767521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.767630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.767712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.771269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.771444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.771523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.774955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.775100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.775176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.778313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.778678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.778764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.781994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.782096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.782181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.785841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.785961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.786039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.789642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.789762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.789833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.793518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.793640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.793738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.797611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.797747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.797836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.801569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.801723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.801815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.805487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.805671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.805745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.809328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.809454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.809557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.813280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.813485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.813561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.817228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.817414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.817488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.821349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.821504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.821577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.825358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.825515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.400 [2024-12-09 05:22:20.825589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.400 [2024-12-09 05:22:20.828835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.400 [2024-12-09 05:22:20.829180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.401 [2024-12-09 05:22:20.829238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.401 [2024-12-09 05:22:20.832494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.401 [2024-12-09 05:22:20.832544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.401 [2024-12-09 05:22:20.832577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.401 [2024-12-09 05:22:20.836155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.401 [2024-12-09 05:22:20.836253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.401 [2024-12-09 05:22:20.836368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.401 [2024-12-09 05:22:20.839938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.401 [2024-12-09 05:22:20.840054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.401 [2024-12-09 05:22:20.840110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.401 [2024-12-09 05:22:20.843728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.401 [2024-12-09 05:22:20.843858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.401 [2024-12-09 05:22:20.843933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.401 [2024-12-09 05:22:20.847517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.401 [2024-12-09 05:22:20.847615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.401 [2024-12-09 05:22:20.847706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.662 [2024-12-09 05:22:20.851252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.662 [2024-12-09 05:22:20.851443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.662 [2024-12-09 05:22:20.851519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.662 [2024-12-09 05:22:20.854952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.662 [2024-12-09 05:22:20.855079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.662 [2024-12-09 
05:22:20.855159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.662 [2024-12-09 05:22:20.858778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.662 [2024-12-09 05:22:20.858959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.662 [2024-12-09 05:22:20.859033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.662 [2024-12-09 05:22:20.862632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.662 [2024-12-09 05:22:20.862783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.662 [2024-12-09 05:22:20.862882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.662 [2024-12-09 05:22:20.865950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.662 [2024-12-09 05:22:20.866254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.662 [2024-12-09 05:22:20.866366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.662 [2024-12-09 05:22:20.869713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.662 [2024-12-09 05:22:20.869806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.662 [2024-12-09 05:22:20.869894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.662 [2024-12-09 05:22:20.873480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.662 [2024-12-09 05:22:20.873576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.662 [2024-12-09 05:22:20.873647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.662 [2024-12-09 05:22:20.877206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.662 [2024-12-09 05:22:20.877303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.662 [2024-12-09 05:22:20.877426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.662 [2024-12-09 05:22:20.881087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.662 [2024-12-09 05:22:20.881179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:27:38.662 [2024-12-09 05:22:20.881195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.662 [2024-12-09 05:22:20.884905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.662 [2024-12-09 05:22:20.885016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.662 [2024-12-09 05:22:20.885032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.662 [2024-12-09 05:22:20.888734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.662 [2024-12-09 05:22:20.888859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.662 [2024-12-09 05:22:20.888875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.662 [2024-12-09 05:22:20.892487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.662 [2024-12-09 05:22:20.892564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.662 [2024-12-09 05:22:20.892580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.662 [2024-12-09 05:22:20.896108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.662 [2024-12-09 05:22:20.896296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.662 [2024-12-09 05:22:20.896312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.662 [2024-12-09 05:22:20.900076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.662 [2024-12-09 05:22:20.900222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.662 [2024-12-09 05:22:20.900239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.662 [2024-12-09 05:22:20.904092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.662 [2024-12-09 05:22:20.904247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.662 [2024-12-09 05:22:20.904332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.662 [2024-12-09 05:22:20.908074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.908235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.908304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.911637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.912004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.912103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.915420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.915534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.915608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.919226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.919377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.919450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.923180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.923306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.923402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.927163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.927352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.927426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.930893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.931001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.931077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.934618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.934728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.934799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.938364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.938529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.938605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.942113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.942212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.942309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.945892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.946079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.946152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.949859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.950017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.950089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.953835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.954011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.954084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.957882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.958045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.958062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.961844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.961975] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.961991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.965211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.965564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.965581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.968843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.968932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.968948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.972672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.972723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.972739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.976396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.976462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.976479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.980111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.980208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.980225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.983933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.984029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.984046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.987774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.987843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.987860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.991730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.991787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.991805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.995592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.995678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.995695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:20.999269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:20.999416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:20.999432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:21.002647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:21.002972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:21.002988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:21.006200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.663 [2024-12-09 05:22:21.006285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.663 [2024-12-09 05:22:21.006301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.663 [2024-12-09 05:22:21.009911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.009998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.010013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.013813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 
05:22:21.013867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.013882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.017687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.017737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.017753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.021527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.021580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.021596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.025388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.025438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.025453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.029218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.029308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.029324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.033062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.033174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.033191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.036876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.037016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.037032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.040711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with 
pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.040779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.040796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.044441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.044530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.044546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.048238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.048425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.048442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.051678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.052021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.052053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.055403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.055457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.055473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.059098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.059199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.059226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.063118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.063225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.063242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.066854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.066939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.066955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.070637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.070715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.070732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.074337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.074439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.074456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.078040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.078152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.078168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.081996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.082140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.082157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.085784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.085911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.085927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.089079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8 01:27:38.664 [2024-12-09 05:22:21.089413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:27:38.664 [2024-12-09 05:22:21.089429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:27:38.664 [2024-12-09 05:22:21.092690] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8
01:27:38.664 [2024-12-09 05:22:21.092773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:27:38.664 [2024-12-09 05:22:21.092788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:27:38.664 [2024-12-09 05:22:21.096469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8
01:27:38.664 [2024-12-09 05:22:21.096522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:27:38.664 [2024-12-09 05:22:21.096538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
01:27:38.664 8054.00 IOPS, 1006.75 MiB/s [2024-12-09T05:22:21.120Z] [2024-12-09 05:22:21.101829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cec5b0) with pdu=0x200016eff3c8
01:27:38.664 [2024-12-09 05:22:21.101890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:27:38.664 [2024-12-09 05:22:21.101910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
01:27:38.664
01:27:38.664 Latency(us)
01:27:38.664 [2024-12-09T05:22:21.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:27:38.664 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
01:27:38.664 nvme0n1 : 2.00 8048.23 1006.03 0.00 0.00 1984.35 1259.21 6210.18
01:27:38.664 [2024-12-09T05:22:21.120Z] ===================================================================================================================
01:27:38.664 [2024-12-09T05:22:21.120Z] Total : 8048.23 1006.03 0.00 0.00 1984.35 1259.21 6210.18
01:27:38.664 {
01:27:38.664 "results": [
01:27:38.664 {
01:27:38.664 "job": "nvme0n1",
01:27:38.664 "core_mask": "0x2",
01:27:38.665 "workload": "randwrite",
01:27:38.665 "status": "finished",
01:27:38.665 "queue_depth": 16,
01:27:38.665 "io_size": 131072,
01:27:38.665 "runtime": 2.00367,
01:27:38.665 "iops": 8048.231495206296,
01:27:38.665 "mibps": 1006.028936900787,
01:27:38.665 "io_failed": 0,
01:27:38.665 "io_timeout": 0,
01:27:38.665 "avg_latency_us": 1984.3476621604864,
01:27:38.665 "min_latency_us": 1259.2069868995634,
01:27:38.665 "max_latency_us": 6210.179912663755
01:27:38.665 }
01:27:38.665 ],
01:27:38.665 "core_count": 1
01:27:38.665 }
01:27:38.924 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
01:27:38.924 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
01:27:38.924 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
01:27:38.924 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
01:27:38.924 | .driver_specific
01:27:38.924 | .nvme_error
01:27:38.924 | .status_code
01:27:38.924 | .command_transient_transport_error'
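The trace above is the core of the digest-error check: host/digest.sh asks the bperf process, over its RPC socket, for the bdev's I/O statistics and pulls the transient-transport-error counter out of the JSON with jq. The same query can be rerun by hand against a live bperf socket; the sketch below is illustrative only, not part of the test output, and assumes the socket path and bdev name this run used (/var/tmp/bperf.sock and nvme0n1).

#!/usr/bin/env bash
# Sketch: re-run the counter query traced above by hand.
# The socket path and bdev name are assumptions taken from this run.
set -euo pipefail
sock=/var/tmp/bperf.sock      # bperf's RPC socket, as passed with -s above
bdev=nvme0n1                  # the bdev attached over NVMe/TCP in this test
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
  jq -r '.bdevs[0]
         | .driver_specific
         | .nvme_error
         | .status_code
         | .command_transient_transport_error')
# The test only requires that at least one transient transport error was counted.
echo "transient transport errors: $errcount"
(( errcount > 0 ))

With the data-digest errors injected during the run, the counter here comes back as 521, so the (( 521 > 0 )) check that follows succeeds.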
0 )) 01:27:38.924 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80359 01:27:38.924 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80359 ']' 01:27:38.924 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80359 01:27:38.924 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:27:38.924 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:38.924 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80359 01:27:39.183 killing process with pid 80359 01:27:39.183 Received shutdown signal, test time was about 2.000000 seconds 01:27:39.183 01:27:39.183 Latency(us) 01:27:39.183 [2024-12-09T05:22:21.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:27:39.183 [2024-12-09T05:22:21.639Z] =================================================================================================================== 01:27:39.183 [2024-12-09T05:22:21.639Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:27:39.183 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:27:39.183 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:27:39.183 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80359' 01:27:39.183 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80359 01:27:39.183 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80359 01:27:39.183 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80152 01:27:39.183 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80152 ']' 01:27:39.183 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80152 01:27:39.183 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:27:39.183 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:27:39.183 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80152 01:27:39.442 killing process with pid 80152 01:27:39.442 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:27:39.442 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:27:39.442 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80152' 01:27:39.442 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80152 01:27:39.442 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80152 01:27:39.442 ************************************ 01:27:39.442 END TEST nvmf_digest_error 01:27:39.442 ************************************ 01:27:39.442 01:27:39.442 real 0m17.401s 
01:27:39.442 user 0m33.166s 01:27:39.442 sys 0m4.492s 01:27:39.442 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:39.442 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:27:39.701 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 01:27:39.701 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 01:27:39.701 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 01:27:39.701 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 01:27:39.701 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:27:39.701 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 01:27:39.701 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 01:27:39.701 05:22:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:27:39.701 rmmod nvme_tcp 01:27:39.701 rmmod nvme_fabrics 01:27:39.701 rmmod nvme_keyring 01:27:39.701 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:27:39.701 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 01:27:39.701 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 01:27:39.701 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80152 ']' 01:27:39.701 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80152 01:27:39.701 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80152 ']' 01:27:39.701 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80152 01:27:39.701 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80152) - No such process 01:27:39.701 Process with pid 80152 is not found 01:27:39.701 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80152 is not found' 01:27:39.701 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:27:39.701 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:27:39.702 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:27:39.702 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 01:27:39.702 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 01:27:39.702 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:27:39.702 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 01:27:39.702 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:27:39.702 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:27:39.702 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:27:39.702 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:27:39.702 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:27:39.702 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:27:39.702 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:27:39.702 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:27:39.702 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:27:39.702 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:27:39.702 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 01:27:39.961 01:27:39.961 real 0m37.083s 01:27:39.961 user 1m7.920s 01:27:39.961 sys 0m10.480s 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 01:27:39.961 ************************************ 01:27:39.961 END TEST nvmf_digest 01:27:39.961 ************************************ 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:27:39.961 ************************************ 01:27:39.961 START TEST nvmf_host_multipath 01:27:39.961 ************************************ 01:27:39.961 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 01:27:40.220 * Looking for test storage... 
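For reference, the digest-error verdict logged a few entries above reduces to reading the transient transport error counter that bdev_get_iostat reports over the bperf RPC socket. Below is a minimal sketch of that check, reusing the socket path, bdev name and jq filter exactly as they appear in the trace; the helper name comes from the trace too, but its body here is a simplified reconstruction, not the actual host/digest.sh source.

    # Simplified reconstruction of the transient-error check traced above.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    get_transient_errcount() {
        local bdev=$1
        # Ask the bdevperf instance for per-bdev iostat and pull out the
        # COMMAND TRANSIENT TRANSPORT ERROR counter from the nvme_error block.
        "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    # The run above reported 521 such errors, so this test condition passes.
    (( $(get_transient_errcount nvme0n1) > 0 ))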
01:27:40.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 01:27:40.220 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:27:40.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:40.221 --rc genhtml_branch_coverage=1 01:27:40.221 --rc genhtml_function_coverage=1 01:27:40.221 --rc genhtml_legend=1 01:27:40.221 --rc geninfo_all_blocks=1 01:27:40.221 --rc geninfo_unexecuted_blocks=1 01:27:40.221 01:27:40.221 ' 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:27:40.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:40.221 --rc genhtml_branch_coverage=1 01:27:40.221 --rc genhtml_function_coverage=1 01:27:40.221 --rc genhtml_legend=1 01:27:40.221 --rc geninfo_all_blocks=1 01:27:40.221 --rc geninfo_unexecuted_blocks=1 01:27:40.221 01:27:40.221 ' 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:27:40.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:40.221 --rc genhtml_branch_coverage=1 01:27:40.221 --rc genhtml_function_coverage=1 01:27:40.221 --rc genhtml_legend=1 01:27:40.221 --rc geninfo_all_blocks=1 01:27:40.221 --rc geninfo_unexecuted_blocks=1 01:27:40.221 01:27:40.221 ' 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:27:40.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:27:40.221 --rc genhtml_branch_coverage=1 01:27:40.221 --rc genhtml_function_coverage=1 01:27:40.221 --rc genhtml_legend=1 01:27:40.221 --rc geninfo_all_blocks=1 01:27:40.221 --rc geninfo_unexecuted_blocks=1 01:27:40.221 01:27:40.221 ' 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:27:40.221 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:27:40.221 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:27:40.222 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:27:40.480 Cannot find device "nvmf_init_br" 01:27:40.480 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 01:27:40.480 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:27:40.480 Cannot find device "nvmf_init_br2" 01:27:40.480 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 01:27:40.480 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:27:40.480 Cannot find device "nvmf_tgt_br" 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:27:40.481 Cannot find device "nvmf_tgt_br2" 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:27:40.481 Cannot find device "nvmf_init_br" 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:27:40.481 Cannot find device "nvmf_init_br2" 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:27:40.481 Cannot find device "nvmf_tgt_br" 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:27:40.481 Cannot find device "nvmf_tgt_br2" 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:27:40.481 Cannot find device "nvmf_br" 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:27:40.481 Cannot find device "nvmf_init_if" 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:27:40.481 Cannot find device "nvmf_init_if2" 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
01:27:40.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:27:40.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:27:40.481 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:27:40.742 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:27:40.742 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:27:40.743 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:27:40.743 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:27:40.743 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:27:40.743 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:27:40.743 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:27:40.743 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:27:40.743 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:27:40.743 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:27:40.743 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:27:40.743 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
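The ip commands traced around this point build a small veth-plus-bridge lab: two initiator-side interfaces stay in the root namespace, two target-side interfaces move into nvmf_tgt_ns_spdk, and everything is joined by one bridge. A condensed sketch of that bring-up follows, keeping the interface names and 10.0.0.x/24 addresses exactly as logged; the iptables ACCEPT rules and ping checks that the trace performs next are omitted for brevity.

    # Condensed view of the nvmf_veth_init topology assembled in this trace.
    ip netns add nvmf_tgt_ns_spdk

    # One veth pair per endpoint; the *_br ends remain in the root namespace.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target-side interfaces move into the namespace where nvmf_tgt will run.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiator addresses 10.0.0.1-2 in the root namespace,
    # target addresses 10.0.0.3-4 inside nvmf_tgt_ns_spdk.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring the links up and enslave the bridge-side ends to a single bridge.
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br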
01:27:40.743 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:27:40.743 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:27:40.743 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:27:40.743 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:27:40.743 05:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:27:40.743 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:27:40.743 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 01:27:40.743 01:27:40.743 --- 10.0.0.3 ping statistics --- 01:27:40.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:40.743 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:27:40.743 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:27:40.743 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 01:27:40.743 01:27:40.743 --- 10.0.0.4 ping statistics --- 01:27:40.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:40.743 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:27:40.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:27:40.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.012 ms 01:27:40.743 01:27:40.743 --- 10.0.0.1 ping statistics --- 01:27:40.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:40.743 rtt min/avg/max/mdev = 0.012/0.012/0.012/0.000 ms 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:27:40.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:27:40.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 01:27:40.743 01:27:40.743 --- 10.0.0.2 ping statistics --- 01:27:40.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:27:40.743 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80676 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80676 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80676 ']' 01:27:40.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:40.743 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:27:40.743 [2024-12-09 05:22:23.127386] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:27:40.743 [2024-12-09 05:22:23.127448] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:27:41.007 [2024-12-09 05:22:23.277808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:27:41.007 [2024-12-09 05:22:23.323650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:27:41.007 [2024-12-09 05:22:23.323698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:27:41.007 [2024-12-09 05:22:23.323704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:27:41.007 [2024-12-09 05:22:23.323709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:27:41.007 [2024-12-09 05:22:23.323713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:27:41.007 [2024-12-09 05:22:23.324491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:27:41.007 [2024-12-09 05:22:23.324486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:27:41.007 [2024-12-09 05:22:23.364659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:27:41.575 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:41.575 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 01:27:41.575 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:27:41.575 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 01:27:41.575 05:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:27:41.575 05:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:27:41.575 05:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80676 01:27:41.575 05:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:27:41.833 [2024-12-09 05:22:24.213692] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:27:41.833 05:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:27:42.091 Malloc0 01:27:42.091 05:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 01:27:42.350 05:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:27:42.609 05:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:27:42.872 [2024-12-09 05:22:25.078389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:27:42.872 05:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:27:42.872 [2024-12-09 05:22:25.286115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:27:42.872 05:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80732 01:27:42.872 05:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 01:27:42.872 05:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:27:42.872 05:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80732 /var/tmp/bdevperf.sock 01:27:42.872 05:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80732 ']' 01:27:42.872 05:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:27:42.872 05:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 01:27:42.872 05:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:27:42.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:27:42.872 05:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 01:27:42.872 05:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:27:43.807 05:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:27:43.807 05:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 01:27:43.807 05:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:27:44.064 05:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:27:44.322 Nvme0n1 01:27:44.322 05:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:27:44.581 Nvme0n1 01:27:44.581 05:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 01:27:44.581 05:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 01:27:45.967 05:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 01:27:45.967 05:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:27:45.967 05:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:27:45.967 05:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 01:27:45.967 05:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80777 01:27:45.967 05:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80676 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:27:45.967 05:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:27:52.536 05:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:27:52.536 05:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:27:52.536 05:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:27:52.536 05:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:27:52.536 Attaching 4 probes... 01:27:52.536 @path[10.0.0.3, 4421]: 17574 01:27:52.536 @path[10.0.0.3, 4421]: 16571 01:27:52.536 @path[10.0.0.3, 4421]: 15364 01:27:52.536 @path[10.0.0.3, 4421]: 15385 01:27:52.536 @path[10.0.0.3, 4421]: 15376 01:27:52.536 05:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:27:52.536 05:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:27:52.536 05:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:27:52.536 05:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:27:52.536 05:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:27:52.536 05:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:27:52.536 05:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80777 01:27:52.536 05:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:27:52.536 05:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 01:27:52.536 05:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:27:52.536 05:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:27:52.794 05:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 01:27:52.794 05:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80890 01:27:52.794 05:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80676 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:27:52.794 05:22:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:27:59.396 Attaching 4 probes... 01:27:59.396 @path[10.0.0.3, 4420]: 17705 01:27:59.396 @path[10.0.0.3, 4420]: 18414 01:27:59.396 @path[10.0.0.3, 4420]: 19516 01:27:59.396 @path[10.0.0.3, 4420]: 20737 01:27:59.396 @path[10.0.0.3, 4420]: 20913 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80890 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81002 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:27:59.396 05:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80676 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:28:05.971 05:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:28:05.971 05:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:28:05.971 05:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:28:05.971 05:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:28:05.971 Attaching 4 probes... 01:28:05.971 @path[10.0.0.3, 4421]: 11929 01:28:05.971 @path[10.0.0.3, 4421]: 17640 01:28:05.971 @path[10.0.0.3, 4421]: 17696 01:28:05.971 @path[10.0.0.3, 4421]: 17719 01:28:05.971 @path[10.0.0.3, 4421]: 17865 01:28:05.971 05:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:28:05.971 05:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:28:05.971 05:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:28:05.971 05:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:28:05.971 05:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:28:05.971 05:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:28:05.971 05:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81002 01:28:05.971 05:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:28:05.971 05:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 01:28:05.971 05:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:28:05.971 05:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:28:05.971 05:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 01:28:05.971 05:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81120 01:28:05.971 05:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80676 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:28:05.971 05:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:28:12.552 Attaching 4 probes... 
01:28:12.552 01:28:12.552 01:28:12.552 01:28:12.552 01:28:12.552 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81120 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81227 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:28:12.552 05:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80676 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:28:19.141 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:28:19.141 05:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:28:19.141 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:28:19.141 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:28:19.141 Attaching 4 probes... 
01:28:19.141 @path[10.0.0.3, 4421]: 17296 01:28:19.141 @path[10.0.0.3, 4421]: 17721 01:28:19.141 @path[10.0.0.3, 4421]: 17767 01:28:19.141 @path[10.0.0.3, 4421]: 17592 01:28:19.141 @path[10.0.0.3, 4421]: 17431 01:28:19.141 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:28:19.141 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:28:19.141 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:28:19.141 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:28:19.141 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:28:19.141 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:28:19.141 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81227 01:28:19.141 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:28:19.141 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:28:19.141 05:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 01:28:20.075 05:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 01:28:20.075 05:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81356 01:28:20.075 05:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80676 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:28:20.075 05:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:28:26.641 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:28:26.641 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 01:28:26.641 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 01:28:26.641 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:28:26.641 Attaching 4 probes... 
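The step traced above removes the optimized 4421 listener outright and gives the initiator a moment to move I/O before the next check expects traffic on the remaining non_optimized 4420 path. The sequence, as traced at multipath.sh@100-104 (sketched here for readability):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  sleep 1    # let the host fail over to 10.0.0.3:4420
  # confirm_io_on_port non_optimized 4420 then repeats the bpftrace + RPC comparison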
01:28:26.641 @path[10.0.0.3, 4420]: 16678 01:28:26.641 @path[10.0.0.3, 4420]: 17447 01:28:26.641 @path[10.0.0.3, 4420]: 17289 01:28:26.641 @path[10.0.0.3, 4420]: 17326 01:28:26.641 @path[10.0.0.3, 4420]: 17218 01:28:26.641 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:28:26.641 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:28:26.641 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:28:26.641 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 01:28:26.641 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 01:28:26.641 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 01:28:26.641 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81356 01:28:26.641 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:28:26.641 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:28:26.641 [2024-12-09 05:23:08.856568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:28:26.641 05:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:28:26.641 05:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 01:28:33.280 05:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 01:28:33.281 05:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81530 01:28:33.281 05:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80676 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:28:33.281 05:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:28:39.881 Attaching 4 probes... 
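Above, the 4421 listener is added back (the target logs the new NVMe/TCP listen on 10.0.0.3 port 4421) and set optimized, and the final check below finds I/O on 4421 again. The bdevperf summary printed when the process is reaped reports iops 7965.67 with 4096-byte I/O over a 54.4 s run; a quick consistency check of those figures (numbers taken from the result block below):
  awk 'BEGIN { iops=7965.673455901312; printf "%.4f MiB/s, ~%.0f I/Os\n", iops*4096/1048576, iops*54.397786 }'
  # -> 31.1159 MiB/s, matching the reported mibps, and roughly 433k I/Os over the run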
01:28:39.881 @path[10.0.0.3, 4421]: 22675 01:28:39.881 @path[10.0.0.3, 4421]: 22460 01:28:39.881 @path[10.0.0.3, 4421]: 22743 01:28:39.881 @path[10.0.0.3, 4421]: 22558 01:28:39.881 @path[10.0.0.3, 4421]: 22587 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81530 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80732 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80732 ']' 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80732 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80732 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80732' 01:28:39.881 killing process with pid 80732 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80732 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80732 01:28:39.881 { 01:28:39.881 "results": [ 01:28:39.881 { 01:28:39.881 "job": "Nvme0n1", 01:28:39.881 "core_mask": "0x4", 01:28:39.881 "workload": "verify", 01:28:39.881 "status": "terminated", 01:28:39.881 "verify_range": { 01:28:39.881 "start": 0, 01:28:39.881 "length": 16384 01:28:39.881 }, 01:28:39.881 "queue_depth": 128, 01:28:39.881 "io_size": 4096, 01:28:39.881 "runtime": 54.397786, 01:28:39.881 "iops": 7965.673455901312, 01:28:39.881 "mibps": 31.1159119371145, 01:28:39.881 "io_failed": 0, 01:28:39.881 "io_timeout": 0, 01:28:39.881 "avg_latency_us": 16049.984634371749, 01:28:39.881 "min_latency_us": 568.789519650655, 01:28:39.881 "max_latency_us": 7033243.388646288 01:28:39.881 } 01:28:39.881 ], 01:28:39.881 "core_count": 1 01:28:39.881 } 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80732 01:28:39.881 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:28:39.881 [2024-12-09 05:22:25.359542] Starting SPDK v25.01-pre git sha1 cabd61f7f / 
DPDK 24.03.0 initialization... 01:28:39.881 [2024-12-09 05:22:25.359635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80732 ] 01:28:39.881 [2024-12-09 05:22:25.511069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:39.881 [2024-12-09 05:22:25.559848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:28:39.881 [2024-12-09 05:22:25.601287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:28:39.881 Running I/O for 90 seconds... 01:28:39.881 10856.00 IOPS, 42.41 MiB/s [2024-12-09T05:23:22.337Z] 10224.50 IOPS, 39.94 MiB/s [2024-12-09T05:23:22.337Z] 9803.33 IOPS, 38.29 MiB/s [2024-12-09T05:23:22.337Z] 9400.50 IOPS, 36.72 MiB/s [2024-12-09T05:23:22.337Z] 9056.40 IOPS, 35.38 MiB/s [2024-12-09T05:23:22.337Z] 8827.00 IOPS, 34.48 MiB/s [2024-12-09T05:23:22.337Z] 8663.14 IOPS, 33.84 MiB/s [2024-12-09T05:23:22.337Z] [2024-12-09 05:22:35.034427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.881 [2024-12-09 05:22:35.034495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:28:39.881 [2024-12-09 05:22:35.034541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.881 [2024-12-09 05:22:35.034551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:28:39.881 [2024-12-09 05:22:35.034566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.881 [2024-12-09 05:22:35.034575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:28:39.881 [2024-12-09 05:22:35.034589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.881 [2024-12-09 05:22:35.034597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:28:39.881 [2024-12-09 05:22:35.034611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.881 [2024-12-09 05:22:35.034619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:28:39.881 [2024-12-09 05:22:35.034633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.881 [2024-12-09 05:22:35.034641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:28:39.881 [2024-12-09 05:22:35.034654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.881 [2024-12-09 05:22:35.034663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:28:39.881 [2024-12-09 05:22:35.034677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.881 [2024-12-09 05:22:35.034686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:28:39.881 [2024-12-09 05:22:35.034828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.881 [2024-12-09 05:22:35.034841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:28:39.881 [2024-12-09 05:22:35.034857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.881 [2024-12-09 05:22:35.034889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:28:39.881 [2024-12-09 05:22:35.034903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.881 [2024-12-09 05:22:35.034912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:28:39.881 [2024-12-09 05:22:35.034926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.881 [2024-12-09 05:22:35.034935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.034949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.882 [2024-12-09 05:22:35.034957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.034971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.882 [2024-12-09 05:22:35.034979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.034993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.882 [2024-12-09 05:22:35.035002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.035016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.882 [2024-12-09 05:22:35.035024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.882 [2024-12-09 05:22:35.037117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.882 [2024-12-09 05:22:35.037147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.882 [2024-12-09 05:22:35.037170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.882 [2024-12-09 05:22:35.037192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.882 [2024-12-09 05:22:35.037215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.882 [2024-12-09 05:22:35.037237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.882 [2024-12-09 05:22:35.037270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:28:39.882 [2024-12-09 05:22:35.037372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.882 [2024-12-09 05:22:35.037464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 
nsid:1 lba:19312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:39.882 [2024-12-09 05:22:35.037920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.882 [2024-12-09 05:22:35.037928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.037943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.037951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.042709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.883 [2024-12-09 05:22:35.042742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.042762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.883 [2024-12-09 05:22:35.042772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.042786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.883 [2024-12-09 05:22:35.042795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.042809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.883 [2024-12-09 05:22:35.042819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
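The completions in this dump all carry the same status, printed as ASYMMETRIC ACCESS INACCESSIBLE (03/02): status code type 03h (path-related status) with status code 02h (ANA inaccessible), which is what the target returns for commands submitted on a listener whose ANA state was just set to inaccessible, so the host's multipath policy can retry them on the other path. A quick way to check that nothing else appears, assuming the dump was saved as try.txt as in the cat above:
  grep -c 'nvme_print_completion' try.txt
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' try.txt   # same count if every completion failed with ANA inaccessible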
01:28:39.883 [2024-12-09 05:22:35.042833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.042841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.042856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.042864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.042878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.042896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.042910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.042918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.042932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.042941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.042955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.042964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.042978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.042987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.883 [2024-12-09 05:22:35.043009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:28:39.883 [2024-12-09 05:22:35.043532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:28:39.883 [2024-12-09 05:22:35.043614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.883 [2024-12-09 05:22:35.043622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.043636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.043645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.043658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.043667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.043681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.043689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.043703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.043712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.043725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.043734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.043752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 
nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.043761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.043775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.043783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.043796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.043805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.043818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.043827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.043841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.043849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.043864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.043872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.043887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.043896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.043910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.043919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.043932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.043941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.043955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.043964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.043978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.043986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 
m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:28:39.884 [2024-12-09 05:22:35.044492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.884 [2024-12-09 05:22:35.044500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:28:39.884 8529.50 IOPS, 33.32 MiB/s [2024-12-09T05:23:22.340Z] 8565.78 IOPS, 33.46 MiB/s [2024-12-09T05:23:22.340Z] 8610.80 IOPS, 33.64 MiB/s [2024-12-09T05:23:22.340Z] 8682.55 IOPS, 33.92 MiB/s [2024-12-09T05:23:22.340Z] 8810.33 IOPS, 34.42 MiB/s [2024-12-09T05:23:22.341Z] 8927.69 IOPS, 34.87 MiB/s [2024-12-09T05:23:22.341Z] 9034.57 IOPS, 35.29 MiB/s [2024-12-09T05:23:22.341Z] [2024-12-09 05:22:41.486530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 05:22:41.486593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 05:22:41.486626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 05:22:41.486649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 05:22:41.486693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 05:22:41.486715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 
05:22:41.486737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 05:22:41.486758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 05:22:41.486780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 05:22:41.486801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 05:22:41.486823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 05:22:41.486845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 05:22:41.486866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 05:22:41.486888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 05:22:41.486909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 05:22:41.486930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129432 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 05:22:41.486958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.885 [2024-12-09 05:22:41.486980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.486995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.885 [2024-12-09 05:22:41.487004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.487017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.885 [2024-12-09 05:22:41.487026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.487039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.885 [2024-12-09 05:22:41.487048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.487061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.885 [2024-12-09 05:22:41.487070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.487084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.885 [2024-12-09 05:22:41.487092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.487105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.885 [2024-12-09 05:22:41.487113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.487127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.885 [2024-12-09 05:22:41.487135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.487148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.885 [2024-12-09 05:22:41.487156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.487169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.885 [2024-12-09 05:22:41.487177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.487191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.885 [2024-12-09 05:22:41.487199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.487213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.885 [2024-12-09 05:22:41.487232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.487246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.885 [2024-12-09 05:22:41.487254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.487268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.885 [2024-12-09 05:22:41.487277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.487290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.885 [2024-12-09 05:22:41.487299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.487312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.885 [2024-12-09 05:22:41.487320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:28:39.885 [2024-12-09 05:22:41.487349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.885 [2024-12-09 05:22:41.487358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.487380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.487402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 
dnr:0 01:28:39.886 [2024-12-09 05:22:41.487416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.487424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.487446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.487469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.487490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.487517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.886 [2024-12-09 05:22:41.487539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.886 [2024-12-09 05:22:41.487562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.886 [2024-12-09 05:22:41.487584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.886 [2024-12-09 05:22:41.487606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.886 [2024-12-09 05:22:41.487628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.886 [2024-12-09 05:22:41.487650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.886 [2024-12-09 05:22:41.487672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.886 [2024-12-09 05:22:41.487694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.886 [2024-12-09 05:22:41.487716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.886 [2024-12-09 05:22:41.487738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.886 [2024-12-09 05:22:41.487761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.886 [2024-12-09 05:22:41.487784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.886 [2024-12-09 05:22:41.487810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.886 [2024-12-09 05:22:41.487832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.886 [2024-12-09 05:22:41.487854] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.886 [2024-12-09 05:22:41.487876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.487901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.487923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.487946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.487968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.487982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.487991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.488004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.488013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.488028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.488036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.488050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.488058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.488076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129568 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.488084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.488098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.488106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.488120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.488128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.488142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.488150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.488164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.488172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.488185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.488194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.488208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.886 [2024-12-09 05:22:41.488216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:28:39.886 [2024-12-09 05:22:41.488230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.488238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.488251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.488260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.488274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.488282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.488295] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.488304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.488318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.488334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.488348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.488360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.488374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.488382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.488396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.488404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.488418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.488426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.488443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.488452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.488465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.488474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.488488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.488496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.488510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.488518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 
05:22:41.488532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.488540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.488554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.488563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.488577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.488585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.488599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.488607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.489317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.489357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.489380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.489401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.489424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.489446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.489469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.887 [2024-12-09 05:22:41.489491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.489512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.489535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.489557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.489578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.489605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.489627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.489650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.489673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.489696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.489718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.489741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.489763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.489785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.887 [2024-12-09 05:22:41.489807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:39.887 [2024-12-09 05:22:41.489821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.888 [2024-12-09 05:22:41.489829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.489854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.888 [2024-12-09 05:22:41.489864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.489878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.489886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.489905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:28:39.888 [2024-12-09 05:22:41.489913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.489927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.489936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.489949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.489957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.489971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.489980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.489994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:50 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490393] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.490979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.490993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.491002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.491016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.491025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.491041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.491050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 
sqhd:0031 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.491503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.888 [2024-12-09 05:22:41.491521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.491538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.888 [2024-12-09 05:22:41.491548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.491562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.888 [2024-12-09 05:22:41.491571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.491586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.888 [2024-12-09 05:22:41.491595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.491610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.888 [2024-12-09 05:22:41.491619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.491633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.888 [2024-12-09 05:22:41.491641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:28:39.888 [2024-12-09 05:22:41.491656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.491673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.491688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.491697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.491711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.491722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.491737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.491746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.491760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.491769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.491784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.491793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.491807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.491816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.491830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.491839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.491853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.491862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.509468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.509518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.509544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.509557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.509578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.889 [2024-12-09 05:22:41.509591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.509611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.889 [2024-12-09 05:22:41.509640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.509661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:28:39.889 [2024-12-09 05:22:41.509673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.509693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.889 [2024-12-09 05:22:41.509705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.509725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.889 [2024-12-09 05:22:41.509738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.509758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.889 [2024-12-09 05:22:41.509770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.509790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.889 [2024-12-09 05:22:41.509803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.509831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.889 [2024-12-09 05:22:41.509845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.509865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.509877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.509897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.509909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.509929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:129008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.509941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.509960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.509972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.509992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:84 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.510004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.510024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.510036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.510066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.510078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.510098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.510110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.510130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.510142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.510161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.510173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.510193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.510205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.510224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.510236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.510256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.510269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.510289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.510300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.510335] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.510350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.510370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.889 [2024-12-09 05:22:41.510382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.510402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.889 [2024-12-09 05:22:41.510414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.510434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.889 [2024-12-09 05:22:41.510446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.510472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.889 [2024-12-09 05:22:41.510485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.510504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.889 [2024-12-09 05:22:41.510516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:28:39.889 [2024-12-09 05:22:41.510536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.510549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.510568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.510580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.510600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.510612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.510631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.510643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.510662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.510674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.510694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.510706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.510725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.510737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.510756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.510768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.510788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.510800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.510819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.510831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.510851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.510869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.510889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.510901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.510920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.890 [2024-12-09 05:22:41.510932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.510952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.890 [2024-12-09 05:22:41.510963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.510983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.890 [2024-12-09 05:22:41.510995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.890 [2024-12-09 05:22:41.511028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.890 [2024-12-09 05:22:41.511059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.890 [2024-12-09 05:22:41.511091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.890 [2024-12-09 05:22:41.511122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.890 [2024-12-09 05:22:41.511154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.511185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:129640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.511217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.511269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 
05:22:41.511302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.511347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.511378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.511410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.511442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.511473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.511506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.511537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.511571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.511602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129736 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.511635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.890 [2024-12-09 05:22:41.511672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:28:39.890 [2024-12-09 05:22:41.511692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.511704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.511724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.891 [2024-12-09 05:22:41.511737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.511757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.891 [2024-12-09 05:22:41.511769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.511788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.891 [2024-12-09 05:22:41.511800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.511821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.891 [2024-12-09 05:22:41.511833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.511852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.891 [2024-12-09 05:22:41.511864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.511885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.891 [2024-12-09 05:22:41.511897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.511917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.891 [2024-12-09 05:22:41.511929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.511948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.891 [2024-12-09 05:22:41.511960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.511979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.891 [2024-12-09 05:22:41.511991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.891 [2024-12-09 05:22:41.512023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.891 [2024-12-09 05:22:41.512055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.891 [2024-12-09 05:22:41.512093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.891 [2024-12-09 05:22:41.512124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.891 [2024-12-09 05:22:41.512155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.891 [2024-12-09 05:22:41.512187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.891 [2024-12-09 05:22:41.512219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 
p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512914] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.891 [2024-12-09 05:22:41.512946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:28:39.891 [2024-12-09 05:22:41.512966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.512978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.512997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.513010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.513029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.513041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.513061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.513074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.513093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.513106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.513126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.513138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.513158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.513170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.513190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.513202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.515469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.515509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.515546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.515583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.515611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.515628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.515656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.515673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.515700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.515717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.515745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.515761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.515789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.515805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.515833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.515849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.515876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.515893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.515920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.515937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.515964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:98 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.515981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.516025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.516069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.516122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.516167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.516211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.516256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.516300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.516362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.516407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 
05:22:41.516434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.516451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.516494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.516539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.516588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.516632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.892 [2024-12-09 05:22:41.516676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.516728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.516772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.516816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.516861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.892 [2024-12-09 05:22:41.516905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:28:39.892 [2024-12-09 05:22:41.516932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.516949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.516976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.516993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.517037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.517081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.517125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.517169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.517213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.517265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.517310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.517369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.517413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.517457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.517501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.517544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.517589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.517633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.517683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.517728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:28:39.893 [2024-12-09 05:22:41.517771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.517823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.517867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.517911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.517955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.517982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.517999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.518026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.518043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.518071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.518087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.518114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.518131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.518158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.518175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.518202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:113 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.518219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.518246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.518263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.518291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.518308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.518347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.518373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.518401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.518418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.518446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.518463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.518490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.893 [2024-12-09 05:22:41.518506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.518533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.518550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.518577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.518594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.518621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.518638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.518665] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.893 [2024-12-09 05:22:41.518682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:28:39.893 [2024-12-09 05:22:41.518709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.518726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.518753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.518770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.518797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.518814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.518841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.518858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.518885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.518913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.518942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.518959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.518986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.519003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.519047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.519092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.519141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.519186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.519242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.894 [2024-12-09 05:22:41.519287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.894 [2024-12-09 05:22:41.519346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.894 [2024-12-09 05:22:41.519391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.894 [2024-12-09 05:22:41.519435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.894 [2024-12-09 05:22:41.519479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.894 [2024-12-09 05:22:41.519536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.894 [2024-12-09 05:22:41.519580] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.894 [2024-12-09 05:22:41.519624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.894 [2024-12-09 05:22:41.519671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.894 [2024-12-09 05:22:41.519715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.894 [2024-12-09 05:22:41.519759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.894 [2024-12-09 05:22:41.519803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.894 [2024-12-09 05:22:41.519847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.894 [2024-12-09 05:22:41.519891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.894 [2024-12-09 05:22:41.519935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.519962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.894 [2024-12-09 05:22:41.519980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.520006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 
[2024-12-09 05:22:41.520023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.520057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.520075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.520102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.520119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.520146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.520163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.520190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.520208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.520240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.520257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.520284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.894 [2024-12-09 05:22:41.520302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:39.894 [2024-12-09 05:22:41.520342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.520361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.520389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.520406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.520434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.520450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.520478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129840 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.520495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.520522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.520539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.520566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.520583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.520610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.520635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.520663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.520680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.520707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.520724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.520752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.520768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.520795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.520812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.520839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.520856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.520883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.520900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.520927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.520944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.520972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.520989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.521016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.521033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.521060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.521077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.521105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.521121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.521149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.521172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.521200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.521217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.521244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.521261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.521288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.521305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.523631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.523677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 
m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.523714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.523726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.523744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.895 [2024-12-09 05:22:41.523755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.523773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.895 [2024-12-09 05:22:41.523784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.523802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.895 [2024-12-09 05:22:41.523813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.523831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.895 [2024-12-09 05:22:41.523842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.523859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.895 [2024-12-09 05:22:41.523870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.523888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.895 [2024-12-09 05:22:41.523899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.523916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.895 [2024-12-09 05:22:41.523940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.523958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.895 [2024-12-09 05:22:41.523969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.523987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.895 [2024-12-09 05:22:41.523998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.524016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.895 [2024-12-09 05:22:41.524027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.524045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.895 [2024-12-09 05:22:41.524056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.524074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.895 [2024-12-09 05:22:41.524084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.524102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.895 [2024-12-09 05:22:41.524113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.524131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.895 [2024-12-09 05:22:41.524142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.524159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.895 [2024-12-09 05:22:41.524170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:28:39.895 [2024-12-09 05:22:41.524188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.524199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.524227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.524255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 
05:22:41.524284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.524318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.524359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.524553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.524585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.524615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.524645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.524676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.524706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.524736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129016 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.524766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.524796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.524826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.524864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.524894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.524924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.524954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.524973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.524983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.525013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.525043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.525074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.525104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.896 [2024-12-09 05:22:41.525134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.525165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.525195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.525230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.525261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.525531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.525564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.525596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
01:28:39.896 [2024-12-09 05:22:41.525616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.525627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.525659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.525690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.525722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.525753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:28:39.896 [2024-12-09 05:22:41.525773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.896 [2024-12-09 05:22:41.525785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.525805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.525816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.525837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.525855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.525876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.525887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.525908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.525919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.525939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.525950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.525971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.525982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.526013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.526044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.526076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.526108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.526139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.526171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.526203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.526239] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.526271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.526307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.526349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.526381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.526414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.526446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.526478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.526510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.526541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129728 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.526576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.526608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.526639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.897 [2024-12-09 05:22:41.526677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.526709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.526740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.526772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.526804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.526836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.526867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526887] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.526898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.526930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.526962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.526982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.526993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.527014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.527024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:39.897 [2024-12-09 05:22:41.527049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.897 [2024-12-09 05:22:41.527061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.898 [2024-12-09 05:22:41.527092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.898 [2024-12-09 05:22:41.527124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.898 [2024-12-09 05:22:41.527155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.898 [2024-12-09 05:22:41.527187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:39.898 
[2024-12-09 05:22:41.527207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.527968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.527979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.528003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.528019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.528043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.528055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.528079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.528090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.528114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.528125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.528149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.528160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.528184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.528195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.528219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.528230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.528253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.528264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.528288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.528299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:39.898 [2024-12-09 05:22:41.528335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:41.528347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:39.898 8719.73 IOPS, 34.06 MiB/s [2024-12-09T05:23:22.354Z] 8467.62 IOPS, 33.08 MiB/s [2024-12-09T05:23:22.354Z] 8489.06 IOPS, 33.16 MiB/s [2024-12-09T05:23:22.354Z] 
8508.11 IOPS, 33.23 MiB/s [2024-12-09T05:23:22.354Z] 8525.16 IOPS, 33.30 MiB/s [2024-12-09T05:23:22.354Z] 8544.90 IOPS, 33.38 MiB/s [2024-12-09T05:23:22.354Z] 8560.48 IOPS, 33.44 MiB/s [2024-12-09T05:23:22.354Z] [2024-12-09 05:22:48.315206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.898 [2024-12-09 05:22:48.315284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.315336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.315348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.315386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.315394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.315408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.315417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.315430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.315438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.315452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.315460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.315474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.315482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.315495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.315503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.316014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:94 nsid:1 lba:15384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.316039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.316061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.316084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.899 [2024-12-09 05:22:48.316106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.899 [2024-12-09 05:22:48.316128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.899 [2024-12-09 05:22:48.316161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.899 [2024-12-09 05:22:48.316184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.899 [2024-12-09 05:22:48.316207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.899 [2024-12-09 05:22:48.316232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.899 [2024-12-09 05:22:48.316256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316271] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.899 [2024-12-09 05:22:48.316279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.899 [2024-12-09 05:22:48.316301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.899 [2024-12-09 05:22:48.316334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.899 [2024-12-09 05:22:48.316357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.899 [2024-12-09 05:22:48.316379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.899 [2024-12-09 05:22:48.316402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.899 [2024-12-09 05:22:48.316425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.899 [2024-12-09 05:22:48.316453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.899 [2024-12-09 05:22:48.316476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.316499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 
p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.316521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.316544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.316567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.316695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.316720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.316745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.316769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.316793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:28:39.899 [2024-12-09 05:22:48.316808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.899 [2024-12-09 05:22:48.316817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.316832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.316841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.316863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.316871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.316887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.316895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.316910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.316919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.316934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.316942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.316957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.316965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.316981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.316989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.317004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.317013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.317028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.317037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.317053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.317062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.317079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.317088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.317103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.317111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.317127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.317135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.317154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.317163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.317178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.317186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.317202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.317211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.317225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.317234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.317249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.317258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.317893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.317905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.317922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.317931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.317948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:28:39.900 [2024-12-09 05:22:48.317957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.317973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.317982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.317998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.318006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.318023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.318031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.318047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.318056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.318072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.900 [2024-12-09 05:22:48.318088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.318105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.318113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.318129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.318138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.318154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.318163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.318179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.318188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.318205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 
nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.318213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.318229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.318238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.318254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.318262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.318279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.318287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.318304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.318312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.318337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.318346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.318364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.318373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.318389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.318419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:28:39.900 [2024-12-09 05:22:48.318438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.900 [2024-12-09 05:22:48.318447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.318464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.318473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.318490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.318499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.318516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.318525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.319133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.319160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.319187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.319212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.319246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.319301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.319328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.319370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
01:28:39.901 [2024-12-09 05:22:48.319389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.319398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.319426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.319454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.319481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.319508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.319536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.319563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.319591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.319618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.319646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.319674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.319702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.319733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.319760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.319788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.319815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.319843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.319870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.319897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.319924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.319952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.319979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.319998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.320007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.320025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.901 [2024-12-09 05:22:48.320034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.320076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.320087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.320105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.320114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.320133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.320142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:28:39.901 [2024-12-09 05:22:48.320160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.901 [2024-12-09 05:22:48.320170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:22:48.320188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:22:48.320197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:22:48.320215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:28:39.902 [2024-12-09 05:22:48.320225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:22:48.320243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:22:48.320252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:22:48.320271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:22:48.320281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:22:48.320300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.902 [2024-12-09 05:22:48.320308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:22:48.320336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.902 [2024-12-09 05:22:48.320346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:22:48.320365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.902 [2024-12-09 05:22:48.320374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:22:48.320392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.902 [2024-12-09 05:22:48.320401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:22:48.320420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.902 [2024-12-09 05:22:48.320437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:22:48.320456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.902 [2024-12-09 05:22:48.320465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:22:48.320500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.902 [2024-12-09 05:22:48.320509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:22:48.320526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:15304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.902 [2024-12-09 05:22:48.320535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:39.902 8276.09 IOPS, 32.33 MiB/s [2024-12-09T05:23:22.358Z] 7916.26 IOPS, 30.92 MiB/s [2024-12-09T05:23:22.358Z] 7586.42 IOPS, 29.63 MiB/s [2024-12-09T05:23:22.358Z] 7282.96 IOPS, 28.45 MiB/s [2024-12-09T05:23:22.358Z] 7002.85 IOPS, 27.35 MiB/s [2024-12-09T05:23:22.358Z] 6743.48 IOPS, 26.34 MiB/s [2024-12-09T05:23:22.358Z] 6502.64 IOPS, 25.40 MiB/s [2024-12-09T05:23:22.358Z] 6498.55 IOPS, 25.38 MiB/s [2024-12-09T05:23:22.358Z] 6573.40 IOPS, 25.68 MiB/s [2024-12-09T05:23:22.358Z] 6648.84 IOPS, 25.97 MiB/s [2024-12-09T05:23:22.358Z] 6714.81 IOPS, 26.23 MiB/s [2024-12-09T05:23:22.358Z] 6777.27 IOPS, 26.47 MiB/s [2024-12-09T05:23:22.358Z] 6830.41 IOPS, 26.68 MiB/s [2024-12-09T05:23:22.358Z] [2024-12-09 05:23:01.395263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:23:01.395339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:23:01.395397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:23:01.395419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:23:01.395442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:23:01.395464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:23:01.395486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:23:01.395507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:119 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:23:01.395550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:23:01.395572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:23:01.395594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:23:01.395616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:23:01.395638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:23:01.395660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:23:01.395682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:23:01.395704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:48136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.902 [2024-12-09 05:23:01.395725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.902 [2024-12-09 05:23:01.395747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395763] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.902 [2024-12-09 05:23:01.395771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:47584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.902 [2024-12-09 05:23:01.395793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.902 [2024-12-09 05:23:01.395820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.902 [2024-12-09 05:23:01.395843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.902 [2024-12-09 05:23:01.395865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:47616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.902 [2024-12-09 05:23:01.395887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.902 [2024-12-09 05:23:01.395909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:28:39.902 [2024-12-09 05:23:01.395948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.395958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.395967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.395975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.395985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.395993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 
[2024-12-09 05:23:01.396002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.903 [2024-12-09 05:23:01.396102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.903 [2024-12-09 05:23:01.396120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.903 [2024-12-09 05:23:01.396138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.903 [2024-12-09 05:23:01.396156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.903 [2024-12-09 05:23:01.396174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.903 [2024-12-09 05:23:01.396192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.903 [2024-12-09 05:23:01.396209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.903 [2024-12-09 05:23:01.396226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.903 [2024-12-09 05:23:01.396244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.903 [2024-12-09 05:23:01.396262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.903 [2024-12-09 05:23:01.396279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.903 [2024-12-09 05:23:01.396297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.903 [2024-12-09 05:23:01.396319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.903 [2024-12-09 05:23:01.396349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.903 [2024-12-09 05:23:01.396366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:35 nsid:1 lba:47752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.903 [2024-12-09 05:23:01.396383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:48232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:48264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48280 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:48288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.903 [2024-12-09 05:23:01.396618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.903 [2024-12-09 05:23:01.396626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.396644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.396662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.396679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.396696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.396715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.396733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 
[2024-12-09 05:23:01.396750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.396768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.396785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.396810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.396828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.396846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.396863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.396880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.396898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:48368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.396915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.396932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.396950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.396968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.396986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.396996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.397004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.397025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.397043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.397061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.397078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.397095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.397113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.397130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.397148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.397165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:48424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.397184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.397201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.397219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.397241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:39.904 [2024-12-09 05:23:01.397263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.397281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.397298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:47904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.397316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.904 [2024-12-09 05:23:01.397341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.904 [2024-12-09 05:23:01.397351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:47920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.905 [2024-12-09 05:23:01.397359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.905 [2024-12-09 05:23:01.397377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.905 [2024-12-09 05:23:01.397394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.905 [2024-12-09 05:23:01.397413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.905 [2024-12-09 05:23:01.397431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:47960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.905 [2024-12-09 05:23:01.397448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.905 [2024-12-09 05:23:01.397465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.905 [2024-12-09 05:23:01.397486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 
[2024-12-09 05:23:01.397496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:47984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.905 [2024-12-09 05:23:01.397505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.905 [2024-12-09 05:23:01.397523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.905 [2024-12-09 05:23:01.397543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d74310 is same with the state(6) to be set 01:28:39.905 [2024-12-09 05:23:01.397563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:39.905 [2024-12-09 05:23:01.397569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.397576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48008 len:8 PRP1 0x0 PRP2 0x0 01:28:39.905 [2024-12-09 05:23:01.397584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:39.905 [2024-12-09 05:23:01.397599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.397606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48464 len:8 PRP1 0x0 PRP2 0x0 01:28:39.905 [2024-12-09 05:23:01.397614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:39.905 [2024-12-09 05:23:01.397628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.397634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48472 len:8 PRP1 0x0 PRP2 0x0 01:28:39.905 [2024-12-09 05:23:01.397642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:39.905 [2024-12-09 05:23:01.397656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.397662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48480 len:8 PRP1 0x0 PRP2 0x0 01:28:39.905 [2024-12-09 05:23:01.397670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 
[2024-12-09 05:23:01.397677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:39.905 [2024-12-09 05:23:01.397683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.397690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48488 len:8 PRP1 0x0 PRP2 0x0 01:28:39.905 [2024-12-09 05:23:01.397697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:39.905 [2024-12-09 05:23:01.397714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.397726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48496 len:8 PRP1 0x0 PRP2 0x0 01:28:39.905 [2024-12-09 05:23:01.397734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:39.905 [2024-12-09 05:23:01.397749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.397755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48504 len:8 PRP1 0x0 PRP2 0x0 01:28:39.905 [2024-12-09 05:23:01.397763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:39.905 [2024-12-09 05:23:01.397779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.397785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48512 len:8 PRP1 0x0 PRP2 0x0 01:28:39.905 [2024-12-09 05:23:01.397793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:39.905 [2024-12-09 05:23:01.397807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.397813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48520 len:8 PRP1 0x0 PRP2 0x0 01:28:39.905 [2024-12-09 05:23:01.397821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:39.905 [2024-12-09 05:23:01.397835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.397841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48528 len:8 PRP1 0x0 PRP2 0x0 01:28:39.905 [2024-12-09 05:23:01.397849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397857] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:39.905 [2024-12-09 05:23:01.397863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.397868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48536 len:8 PRP1 0x0 PRP2 0x0 01:28:39.905 [2024-12-09 05:23:01.397876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:39.905 [2024-12-09 05:23:01.397889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.397896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48544 len:8 PRP1 0x0 PRP2 0x0 01:28:39.905 [2024-12-09 05:23:01.397904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:39.905 [2024-12-09 05:23:01.397917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.397924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48552 len:8 PRP1 0x0 PRP2 0x0 01:28:39.905 [2024-12-09 05:23:01.397931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:39.905 [2024-12-09 05:23:01.397950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.397957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48560 len:8 PRP1 0x0 PRP2 0x0 01:28:39.905 [2024-12-09 05:23:01.397965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.397973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:39.905 [2024-12-09 05:23:01.397978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.397984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48568 len:8 PRP1 0x0 PRP2 0x0 01:28:39.905 [2024-12-09 05:23:01.397992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.398001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:39.905 [2024-12-09 05:23:01.398008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.398014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48576 len:8 PRP1 0x0 PRP2 0x0 01:28:39.905 [2024-12-09 05:23:01.398022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.905 [2024-12-09 05:23:01.398030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 01:28:39.905 [2024-12-09 05:23:01.398036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:39.905 [2024-12-09 05:23:01.398042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48584 len:8 PRP1 0x0 PRP2 0x0 01:28:39.906 [2024-12-09 05:23:01.398050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.906 [2024-12-09 05:23:01.398963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:28:39.906 [2024-12-09 05:23:01.399028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:39.906 [2024-12-09 05:23:01.399041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:39.906 [2024-12-09 05:23:01.399064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce51e0 (9): Bad file descriptor 01:28:39.906 [2024-12-09 05:23:01.399425] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:28:39.906 [2024-12-09 05:23:01.399450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce51e0 with addr=10.0.0.3, port=4421 01:28:39.906 [2024-12-09 05:23:01.399460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce51e0 is same with the state(6) to be set 01:28:39.906 [2024-12-09 05:23:01.399507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce51e0 (9): Bad file descriptor 01:28:39.906 [2024-12-09 05:23:01.399527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:28:39.906 [2024-12-09 05:23:01.399536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:28:39.906 [2024-12-09 05:23:01.399546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:28:39.906 [2024-12-09 05:23:01.399555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 01:28:39.906 [2024-12-09 05:23:01.415278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:28:39.906 6872.26 IOPS, 26.84 MiB/s [2024-12-09T05:23:22.362Z] 6911.14 IOPS, 27.00 MiB/s [2024-12-09T05:23:22.362Z] 6952.46 IOPS, 27.16 MiB/s [2024-12-09T05:23:22.362Z] 6998.76 IOPS, 27.34 MiB/s [2024-12-09T05:23:22.362Z] 7040.44 IOPS, 27.50 MiB/s [2024-12-09T05:23:22.362Z] 7080.82 IOPS, 27.66 MiB/s [2024-12-09T05:23:22.362Z] 7117.49 IOPS, 27.80 MiB/s [2024-12-09T05:23:22.362Z] 7157.93 IOPS, 27.96 MiB/s [2024-12-09T05:23:22.362Z] 7215.09 IOPS, 28.18 MiB/s [2024-12-09T05:23:22.362Z] 7268.02 IOPS, 28.39 MiB/s [2024-12-09T05:23:22.362Z] [2024-12-09 05:23:11.434793] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
01:28:39.906 7321.60 IOPS, 28.60 MiB/s [2024-12-09T05:23:22.362Z] 7368.09 IOPS, 28.78 MiB/s [2024-12-09T05:23:22.362Z] 7438.98 IOPS, 29.06 MiB/s [2024-12-09T05:23:22.362Z] 7524.00 IOPS, 29.39 MiB/s [2024-12-09T05:23:22.362Z] 7602.94 IOPS, 29.70 MiB/s [2024-12-09T05:23:22.362Z] 7679.04 IOPS, 30.00 MiB/s [2024-12-09T05:23:22.362Z] 7750.04 IOPS, 30.27 MiB/s [2024-12-09T05:23:22.362Z] 7818.27 IOPS, 30.54 MiB/s [2024-12-09T05:23:22.362Z] 7883.28 IOPS, 30.79 MiB/s [2024-12-09T05:23:22.362Z] 7946.37 IOPS, 31.04 MiB/s [2024-12-09T05:23:22.362Z] Received shutdown signal, test time was about 54.398493 seconds 01:28:39.906 01:28:39.906 Latency(us) 01:28:39.906 [2024-12-09T05:23:22.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:28:39.906 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:28:39.906 Verification LBA range: start 0x0 length 0x4000 01:28:39.906 Nvme0n1 : 54.40 7965.67 31.12 0.00 0.00 16049.98 568.79 7033243.39 01:28:39.906 [2024-12-09T05:23:22.362Z] =================================================================================================================== 01:28:39.906 [2024-12-09T05:23:22.362Z] Total : 7965.67 31.12 0.00 0.00 16049.98 568.79 7033243.39 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:28:39.906 rmmod nvme_tcp 01:28:39.906 rmmod nvme_fabrics 01:28:39.906 rmmod nvme_keyring 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80676 ']' 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80676 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80676 ']' 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80676 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80676 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:28:39.906 killing process with pid 80676 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80676' 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80676 01:28:39.906 05:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80676 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:28:40.166 05:23:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:28:40.166 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 01:28:40.425 01:28:40.425 real 1m0.231s 01:28:40.425 user 2m47.376s 01:28:40.425 sys 0m16.378s 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:28:40.425 ************************************ 01:28:40.425 END TEST nvmf_host_multipath 01:28:40.425 ************************************ 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:28:40.425 ************************************ 01:28:40.425 START TEST nvmf_timeout 01:28:40.425 ************************************ 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 01:28:40.425 * Looking for test storage... 01:28:40.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 01:28:40.425 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:28:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:40.693 --rc genhtml_branch_coverage=1 01:28:40.693 --rc genhtml_function_coverage=1 01:28:40.693 --rc genhtml_legend=1 01:28:40.693 --rc geninfo_all_blocks=1 01:28:40.693 --rc geninfo_unexecuted_blocks=1 01:28:40.693 01:28:40.693 ' 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:28:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:40.693 --rc genhtml_branch_coverage=1 01:28:40.693 --rc genhtml_function_coverage=1 01:28:40.693 --rc genhtml_legend=1 01:28:40.693 --rc geninfo_all_blocks=1 01:28:40.693 --rc geninfo_unexecuted_blocks=1 01:28:40.693 01:28:40.693 ' 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:28:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:40.693 --rc genhtml_branch_coverage=1 01:28:40.693 --rc genhtml_function_coverage=1 01:28:40.693 --rc genhtml_legend=1 01:28:40.693 --rc geninfo_all_blocks=1 01:28:40.693 --rc geninfo_unexecuted_blocks=1 01:28:40.693 01:28:40.693 ' 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:28:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:28:40.693 --rc genhtml_branch_coverage=1 01:28:40.693 --rc genhtml_function_coverage=1 01:28:40.693 --rc genhtml_legend=1 01:28:40.693 --rc geninfo_all_blocks=1 01:28:40.693 --rc geninfo_unexecuted_blocks=1 01:28:40.693 01:28:40.693 ' 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:28:40.693 
05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 01:28:40.693 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:28:40.694 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 01:28:40.694 05:23:22 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:28:40.694 Cannot find device "nvmf_init_br" 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:28:40.694 Cannot find device "nvmf_init_br2" 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 01:28:40.694 Cannot find device "nvmf_tgt_br" 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 01:28:40.694 05:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:28:40.694 Cannot find device "nvmf_tgt_br2" 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:28:40.694 Cannot find device "nvmf_init_br" 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:28:40.694 Cannot find device "nvmf_init_br2" 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:28:40.694 Cannot find device "nvmf_tgt_br" 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:28:40.694 Cannot find device "nvmf_tgt_br2" 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:28:40.694 Cannot find device "nvmf_br" 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:28:40.694 Cannot find device "nvmf_init_if" 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:28:40.694 Cannot find device "nvmf_init_if2" 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:28:40.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:28:40.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:28:40.694 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
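A condensed sketch of the virtual topology nvmf_veth_init assembles in the trace above, reduced to a single initiator/target veth pair (the real helper creates two of each and also opens port 4420 on the second initiator interface); it must be run as root.

    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"

    # veth pairs: one end stays in the host, the peer gets bridged
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns "$NS"              # target side lives in the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set lo up

    # bridge the two host-side peers so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    # allow NVMe/TCP (port 4420) in and let the bridge forward traffic
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.3                               # host -> namespaced target, as verified above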
01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:28:40.967 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:28:40.967 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 01:28:40.967 01:28:40.967 --- 10.0.0.3 ping statistics --- 01:28:40.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:28:40.967 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:28:40.967 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:28:40.967 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms 01:28:40.967 01:28:40.967 --- 10.0.0.4 ping statistics --- 01:28:40.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:28:40.967 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:28:40.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:28:40.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 01:28:40.967 01:28:40.967 --- 10.0.0.1 ping statistics --- 01:28:40.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:28:40.967 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:28:40.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:28:40.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 01:28:40.967 01:28:40.967 --- 10.0.0.2 ping statistics --- 01:28:40.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:28:40.967 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81898 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81898 01:28:40.967 05:23:23 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81898 ']' 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:40.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:40.967 05:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:28:41.225 [2024-12-09 05:23:23.443501] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:28:41.225 [2024-12-09 05:23:23.443553] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:28:41.225 [2024-12-09 05:23:23.582912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:28:41.225 [2024-12-09 05:23:23.635190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:28:41.225 [2024-12-09 05:23:23.635240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:28:41.226 [2024-12-09 05:23:23.635246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:28:41.226 [2024-12-09 05:23:23.635251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:28:41.226 [2024-12-09 05:23:23.635255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
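How the target is launched in this run: inside the namespace built earlier, with tracing enabled (-e 0xFFFF) and two cores (-m 0x3). waitforlisten from common/autotest_common.sh then blocks until the app answers on /var/tmp/spdk.sock; the polling loop below is only a crude stand-in for it, assuming rpc_get_methods as the liveness probe.

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done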
01:28:41.226 [2024-12-09 05:23:23.636153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:28:41.226 [2024-12-09 05:23:23.636157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:28:41.226 [2024-12-09 05:23:23.677134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:28:42.163 05:23:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:28:42.163 05:23:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 01:28:42.163 05:23:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:28:42.163 05:23:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 01:28:42.163 05:23:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:28:42.163 05:23:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:28:42.163 05:23:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:28:42.163 05:23:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:28:42.163 [2024-12-09 05:23:24.539746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:28:42.163 05:23:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:28:42.422 Malloc0 01:28:42.422 05:23:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:28:42.681 05:23:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:28:42.940 05:23:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:28:42.940 [2024-12-09 05:23:25.356374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:28:42.940 05:23:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 01:28:42.940 05:23:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81947 01:28:42.940 05:23:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81947 /var/tmp/bdevperf.sock 01:28:42.940 05:23:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81947 ']' 01:28:42.940 05:23:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:28:42.940 05:23:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 01:28:42.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:28:42.940 05:23:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
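The provisioning sequence issued above, condensed into one runnable block (paths and names exactly as they appear in this run): configure the TCP transport, back the subsystem with a 64 MiB malloc bdev, expose it on 10.0.0.3:4420, then start bdevperf as a separate app with its own RPC socket so the test can drive and kill it independently of the target.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # bdevperf: 128 queue depth, 4 KiB verify workload for 10 s, RPC on its own socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &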
01:28:42.940 05:23:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 01:28:42.940 05:23:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:28:43.199 [2024-12-09 05:23:25.402781] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:28:43.199 [2024-12-09 05:23:25.402840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81947 ] 01:28:43.199 [2024-12-09 05:23:25.554589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:28:43.199 [2024-12-09 05:23:25.607590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:28:43.199 [2024-12-09 05:23:25.648331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:28:44.136 05:23:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:28:44.136 05:23:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 01:28:44.136 05:23:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:28:44.136 05:23:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 01:28:44.395 NVMe0n1 01:28:44.395 05:23:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:28:44.395 05:23:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81965 01:28:44.395 05:23:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 01:28:44.654 Running I/O for 10 seconds... 
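A sketch of the step timeout.sh is exercising here, taken from the commands traced above: attach the NVMe-oF bdev with a controller-loss timeout and reconnect delay, kick off I/O through bdevperf, then remove the listener so in-flight commands are aborted and the reconnect/timeout path has to engage (this is what produces the SQ-deletion aborts logged below).

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &

    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420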
01:28:45.592 05:23:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:28:45.592 8098.00 IOPS, 31.63 MiB/s [2024-12-09T05:23:28.048Z] [2024-12-09 05:23:27.990293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.592 [2024-12-09 05:23:27.990825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.592 [2024-12-09 05:23:27.990907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.592 [2024-12-09 05:23:27.990953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.592 [2024-12-09 05:23:27.990989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.592 [2024-12-09 05:23:27.991033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.592 [2024-12-09 05:23:27.991068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.592 [2024-12-09 05:23:27.991114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.592 [2024-12-09 05:23:27.991152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.592 [2024-12-09 05:23:27.991198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.592 [2024-12-09 05:23:27.991233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.592 [2024-12-09 05:23:27.991279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.592 [2024-12-09 05:23:27.991316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.592 [2024-12-09 05:23:27.991375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.592 [2024-12-09 05:23:27.991411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.991454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.991489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.991525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.991559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72440 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.991597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.991631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.991672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.991714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.991751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.991785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.991825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.991859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.991899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.991934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.991972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.992007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.992048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.992084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.992119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.992153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.992192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.992229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.992268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.992302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 
[2024-12-09 05:23:27.992356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.992393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.992435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.992472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.992512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.992548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.992586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.992620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.992660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.992697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.992733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.992770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.992808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.992847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.992887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.992921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.992962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.992996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.993036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.993067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.993104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.993141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.993177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.993213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.993254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.993289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.993335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.993377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.993417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.993452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.993487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.993517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.993550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.993581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.993625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.993659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.993694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.993728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.993766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.993800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.993838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.993875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.993911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.993945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.993985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.994019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.994059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.994089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.994133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.994167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.994205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.994239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.994277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.593 [2024-12-09 05:23:27.994311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.593 [2024-12-09 05:23:27.994365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.594 [2024-12-09 05:23:27.994401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.594 [2024-12-09 05:23:27.994443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.594 [2024-12-09 05:23:27.994477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.594 [2024-12-09 05:23:27.994516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.594 [2024-12-09 05:23:27.994550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.594 [2024-12-09 05:23:27.994585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 01:28:45.594 [2024-12-09 05:23:27.994595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.594 [2024-12-09 05:23:27.994602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.594 [2024-12-09 05:23:27.994609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.594 [2024-12-09 05:23:27.994614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.594 [2024-12-09 05:23:27.994621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.594 [2024-12-09 05:23:27.994627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.594 [2024-12-09 05:23:27.994634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.594 [2024-12-09 05:23:27.994639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.594 [2024-12-09 05:23:27.994646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.594 [2024-12-09 05:23:27.994651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.594 [2024-12-09 05:23:27.994658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.594 [2024-12-09 05:23:27.994663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.594 [2024-12-09 05:23:27.994670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.594 [2024-12-09 05:23:27.994675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.594 [2024-12-09 05:23:27.994682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.594 [2024-12-09 05:23:27.994686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.594 [2024-12-09 05:23:27.994693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.594 [2024-12-09 05:23:27.994698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.594 [2024-12-09 05:23:27.994705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:28:45.594 [2024-12-09 05:23:27.994710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:45.594 [2024-12-09 05:23:27.994717] 
[... 01:28:45.594-595, 2024-12-09 05:23:27.994722 to 05:23:27.995534: repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs; every queued WRITE (lba 72848-72976) and READ (lba 71960-72352) on sqid:1, len:8, is listed and completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
01:28:45.596 [2024-12-09 05:23:27.995541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1848970 is same with the state(6) to be set
01:28:45.596 [2024-12-09 05:23:27.995549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
01:28:45.596 [2024-12-09 05:23:27.995553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
01:28:45.596 [2024-12-09 05:23:27.995558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72360 len:8 PRP1 0x0 PRP2 0x0
01:28:45.596 [2024-12-09 05:23:27.995564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:28:45.596 [2024-12-09 05:23:27.995817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
01:28:45.596 [2024-12-09 05:23:27.995894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e8e50 (9): Bad file descriptor
01:28:45.596 [2024-12-09 05:23:27.995965] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
01:28:45.596 [2024-12-09 05:23:27.995976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e8e50 with addr=10.0.0.3, port=4420
01:28:45.596 [2024-12-09 05:23:27.995983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e8e50 is same with the state(6) to be set
01:28:45.596 [2024-12-09 05:23:27.995993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e8e50 (9): Bad file descriptor
01:28:45.596 [2024-12-09 05:23:27.996003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
01:28:45.596 [2024-12-09 05:23:27.996008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
01:28:45.596 [2024-12-09 05:23:27.996015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
01:28:45.596 [2024-12-09 05:23:27.996021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
01:28:45.596 [2024-12-09 05:23:27.996028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
01:28:45.596 05:23:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
01:28:47.470 4497.50 IOPS, 17.57 MiB/s [2024-12-09T05:23:30.186Z] 2998.33 IOPS, 11.71 MiB/s [2024-12-09T05:23:30.186Z] [2024-12-09 05:23:29.992435] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
01:28:47.730 [2024-12-09 05:23:29.992492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e8e50 with addr=10.0.0.3, port=4420
01:28:47.730 [2024-12-09 05:23:29.992503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e8e50 is same with the state(6) to be set
01:28:47.730 [2024-12-09 05:23:29.992529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e8e50 (9): Bad file descriptor
01:28:47.730 [2024-12-09 05:23:29.992542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
01:28:47.730 [2024-12-09 05:23:29.992548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
01:28:47.730 [2024-12-09 05:23:29.992556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
01:28:47.730 [2024-12-09 05:23:29.992563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
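Note: errno = 111 in the uring_sock_create errors above is ECONNREFUSED; the TCP listener for 10.0.0.3:4420 is not available at this point (it is re-added at host/timeout.sh@71 further down), so every reconnect attempt made by the bdev_nvme reset path is refused. A minimal sketch of how one could confirm that from the target side, assuming the target uses the default RPC socket /var/tmp/spdk.sock (not shown in this excerpt) and that this SPDK revision provides the nvmf_subsystem_get_listeners RPC:

# Sketch (assumptions noted above): list the listeners currently attached to the subsystem.
# An empty result while the test is blocking the port would explain the ECONNREFUSED loop.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
  | jq -r '.[].address | "\(.trtype) \(.traddr):\(.trsvcid)"'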
01:28:47.730 [2024-12-09 05:23:29.992571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
01:28:47.730 05:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
01:28:47.730 05:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
01:28:47.730 05:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:28:47.988 05:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
01:28:47.988 05:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
01:28:47.988 05:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
01:28:47.988 05:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
01:28:47.988 05:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
01:28:47.988 05:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
01:28:49.508 2248.75 IOPS, 8.78 MiB/s [2024-12-09T05:23:32.223Z] 1799.00 IOPS, 7.03 MiB/s [2024-12-09T05:23:32.223Z] [2024-12-09 05:23:31.988942] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
01:28:49.767 [2024-12-09 05:23:31.988991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e8e50 with addr=10.0.0.3, port=4420
01:28:49.767 [2024-12-09 05:23:31.989002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e8e50 is same with the state(6) to be set
01:28:49.767 [2024-12-09 05:23:31.989020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e8e50 (9): Bad file descriptor
01:28:49.767 [2024-12-09 05:23:31.989033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
01:28:49.767 [2024-12-09 05:23:31.989039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
01:28:49.767 [2024-12-09 05:23:31.989046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
01:28:49.767 [2024-12-09 05:23:31.989054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
01:28:49.767 [2024-12-09 05:23:31.989061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
01:28:51.643 1499.17 IOPS, 5.86 MiB/s [2024-12-09T05:23:34.099Z] 1285.00 IOPS, 5.02 MiB/s [2024-12-09T05:23:34.099Z] [2024-12-09 05:23:33.985371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
01:28:51.643 [2024-12-09 05:23:33.985408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
01:28:51.643 [2024-12-09 05:23:33.985416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
01:28:51.643 [2024-12-09 05:23:33.985424] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
01:28:51.643 [2024-12-09 05:23:33.985434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
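Note: the host/timeout.sh@57/@58 checks above assert that bdevperf still reports the controller NVMe0 and the namespace bdev NVMe0n1 while the reconnect loop is running. The same check, restated as a standalone sketch; the RPC socket, commands and expected names are taken from the trace above, the rest is generic shell:

#!/usr/bin/env bash
# Sketch of the controller/bdev name check performed by host/timeout.sh (get_controller / get_bdev).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Query the names registered earlier by bdev_nvme_attach_controller -b NVMe0.
ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')

[[ "$ctrlr" == "NVMe0" ]]  || echo "unexpected controller name: $ctrlr"
[[ "$bdev"  == "NVMe0n1" ]] || echo "unexpected bdev name: $bdev"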
01:28:52.581 1124.38 IOPS, 4.39 MiB/s
01:28:52.581 Latency(us)
01:28:52.581 [2024-12-09T05:23:35.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:28:52.581 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
01:28:52.581 Verification LBA range: start 0x0 length 0x4000
01:28:52.581 NVMe0n1 : 8.12 1107.54 4.33 15.76 0.00 114032.59 2547.03 7033243.39
01:28:52.581 [2024-12-09T05:23:35.037Z] ===================================================================================================================
01:28:52.581 [2024-12-09T05:23:35.037Z] Total : 1107.54 4.33 15.76 0.00 114032.59 2547.03 7033243.39
01:28:52.581 {
01:28:52.581   "results": [
01:28:52.581     {
01:28:52.581       "job": "NVMe0n1",
01:28:52.581       "core_mask": "0x4",
01:28:52.581       "workload": "verify",
01:28:52.581       "status": "finished",
01:28:52.581       "verify_range": {
01:28:52.581         "start": 0,
01:28:52.581         "length": 16384
01:28:52.581       },
01:28:52.581       "queue_depth": 128,
01:28:52.581       "io_size": 4096,
01:28:52.581       "runtime": 8.121601,
01:28:52.581       "iops": 1107.5402497611,
01:28:52.581       "mibps": 4.326329100629297,
01:28:52.581       "io_failed": 128,
01:28:52.581       "io_timeout": 0,
01:28:52.581       "avg_latency_us": 114032.59122051996,
01:28:52.581       "min_latency_us": 2547.0323144104805,
01:28:52.581       "max_latency_us": 7033243.388646288
01:28:52.581     }
01:28:52.581   ],
01:28:52.581   "core_count": 1
01:28:52.581 }
01:28:53.150 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
01:28:53.150 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:28:53.150 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
01:28:53.407 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
01:28:53.407 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
01:28:53.407 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
01:28:53.407 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
01:28:53.665 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
01:28:53.666 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81965
01:28:53.666 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81947
01:28:53.666 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81947 ']'
01:28:53.666 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81947
01:28:53.666 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
01:28:53.666 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:28:53.666 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81947
01:28:53.666 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
01:28:53.666 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
01:28:53.666 killing process with pid 81947
01:28:53.666 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81947'
01:28:53.666 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81947
01:28:53.666 Received shutdown signal, test time was about 9.121663 seconds
01:28:53.666
01:28:53.666 Latency(us)
01:28:53.666 [2024-12-09T05:23:36.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:28:53.666 [2024-12-09T05:23:36.122Z] ===================================================================================================================
01:28:53.666 [2024-12-09T05:23:36.122Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:28:53.666 05:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81947
01:28:53.926 05:23:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
01:28:53.926 [2024-12-09 05:23:36.344474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
01:28:53.926 05:23:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82092
01:28:53.926 05:23:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
01:28:53.926 05:23:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82092 /var/tmp/bdevperf.sock
01:28:53.926 05:23:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82092 ']'
01:28:53.926 05:23:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
01:28:53.926 05:23:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
01:28:53.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
01:28:53.926 05:23:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
01:28:53.926 05:23:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
01:28:53.926 05:23:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
01:28:54.185 [2024-12-09 05:23:36.415132] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
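Note: the JSON block in the run summary above is the machine-readable form of the latency table (results[0].iops, io_failed, avg_latency_us and so on). A minimal sketch of post-processing it with jq, assuming the block has been captured to a file named results.json (the filename is illustrative, not from this log):

# Sketch: pull the headline numbers out of the bdevperf JSON summary printed above.
jq -r '.results[0] | "job=\(.job) iops=\(.iops) io_failed=\(.io_failed) avg_latency_us=\(.avg_latency_us)"' results.json
jq -r '.core_count' results.json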
01:28:54.185 [2024-12-09 05:23:36.415215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82092 ]
01:28:54.185 [2024-12-09 05:23:36.543903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
01:28:54.185 [2024-12-09 05:23:36.597767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
01:28:54.185 [2024-12-09 05:23:36.638376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
01:28:55.118 05:23:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:28:55.118 05:23:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
01:28:55.118 05:23:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
01:28:55.118 05:23:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
01:28:55.377 NVMe0n1
01:28:55.377 05:23:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82111
01:28:55.377 05:23:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
01:28:55.635 05:23:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
01:28:55.635 Running I/O for 10 seconds...
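Note: the two RPCs above are what produce the reconnect behaviour seen earlier in this log: --reconnect-delay-sec 1 retries the TCP connection roughly every second, --fast-io-fail-timeout-sec 2 starts failing I/O back after two seconds without a connection, and --ctrlr-loss-timeout-sec 5 gives up on the controller after five. Restated as a standalone sketch with the paths and arguments copied from the trace; reading -r -1 as an unlimited retry count is an assumption, check bdev_nvme_set_options on this SPDK revision:

# Sketch of the bdevperf-side setup performed by host/timeout.sh@78-79 above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Retry option copied verbatim from the trace (-r -1, assumed here to mean "no limit").
"$rpc" -s "$sock" bdev_nvme_set_options -r -1

# Attach the NVMe-oF/TCP controller; the three timeout flags drive the reconnect,
# fast-io-fail and controller-loss behaviour exercised by this test.
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1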
01:28:56.578 05:23:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
01:28:56.578 7657.00 IOPS, 29.91 MiB/s [2024-12-09T05:23:39.034Z] [2024-12-09 05:23:38.973508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[... 01:28:56.578-580, 2024-12-09 05:23:38.973570 to 05:23:38.974681: repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs; every queued WRITE (lba 68920-69504) and READ (lba 68488-68616) on sqid:1, len:8, is listed and completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; listing truncated ...]
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:68624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:68632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:68640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:68648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:68656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:68664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:68672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:68696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:28:56.580 [2024-12-09 05:23:38.974812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:68704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:68728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:68736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:68744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:68760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:68768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:68776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974935] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:68800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.974992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.974997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.975004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.975009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.975016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.975023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.975030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.975035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.975042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.975047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.975054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.975059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.975066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.975071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.975078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.975083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.975090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.975096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.975103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.975108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.975115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.975120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.975126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:28:56.580 [2024-12-09 05:23:38.975131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.975138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6d970 is same with the state(6) to be set 01:28:56.580 [2024-12-09 05:23:38.975145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:28:56.580 [2024-12-09 05:23:38.975149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:28:56.580 [2024-12-09 05:23:38.975156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68912 len:8 PRP1 0x0 PRP2 0x0 01:28:56.580 [2024-12-09 05:23:38.975161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:28:56.580 [2024-12-09 05:23:38.975448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:28:56.580 [2024-12-09 05:23:38.975534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0de50 (9): Bad file descriptor 01:28:56.580 [2024-12-09 05:23:38.975612] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:28:56.580 [2024-12-09 05:23:38.975630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0de50 with addr=10.0.0.3, 
port=4420 01:28:56.580 [2024-12-09 05:23:38.975637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0de50 is same with the state(6) to be set 01:28:56.580 [2024-12-09 05:23:38.975649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0de50 (9): Bad file descriptor 01:28:56.580 [2024-12-09 05:23:38.975660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:28:56.580 [2024-12-09 05:23:38.975667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:28:56.580 [2024-12-09 05:23:38.975675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:28:56.580 [2024-12-09 05:23:38.975682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 01:28:56.580 [2024-12-09 05:23:38.975689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:28:56.580 05:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 01:28:57.772 4280.50 IOPS, 16.72 MiB/s [2024-12-09T05:23:40.228Z] [2024-12-09 05:23:39.973871] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:28:57.772 [2024-12-09 05:23:39.973912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0de50 with addr=10.0.0.3, port=4420 01:28:57.772 [2024-12-09 05:23:39.973938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0de50 is same with the state(6) to be set 01:28:57.772 [2024-12-09 05:23:39.973954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0de50 (9): Bad file descriptor 01:28:57.772 [2024-12-09 05:23:39.973966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:28:57.772 [2024-12-09 05:23:39.973971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:28:57.772 [2024-12-09 05:23:39.973978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:28:57.772 [2024-12-09 05:23:39.973985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 01:28:57.772 [2024-12-09 05:23:39.973992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:28:57.772 05:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:28:57.772 [2024-12-09 05:23:40.195998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:28:58.030 05:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82111 01:28:58.597 2853.67 IOPS, 11.15 MiB/s [2024-12-09T05:23:41.053Z] [2024-12-09 05:23:40.988220] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
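The errno = 111 connect() failures and the closing "Resetting controller successful" notice above are driven by the timeout test toggling the target's TCP listener while I/O is in flight. A minimal sketch of that pattern, assuming the same rpc.py path, NQN, and address that appear in this log (the sleep length is an assumed placeholder, not the test's actual timing):

  # Sketch: simulate a transient path failure against the subsystem seen in this log.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # Drop the listener; host-side reconnects then fail with ECONNREFUSED (errno 111).
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
  sleep 5   # assumed outage window
  # Restore the listener; the next reconnect attempt succeeds and the controller reset completes.
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420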
01:29:00.468 2140.25 IOPS, 8.36 MiB/s [2024-12-09T05:23:43.862Z] 3231.00 IOPS, 12.62 MiB/s [2024-12-09T05:23:45.245Z] 4128.50 IOPS, 16.13 MiB/s [2024-12-09T05:23:46.180Z] 4782.14 IOPS, 18.68 MiB/s [2024-12-09T05:23:47.115Z] 5267.38 IOPS, 20.58 MiB/s [2024-12-09T05:23:48.050Z] 5625.22 IOPS, 21.97 MiB/s [2024-12-09T05:23:48.050Z] 5943.50 IOPS, 23.22 MiB/s
01:29:05.594 Latency(us)
01:29:05.594 [2024-12-09T05:23:48.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:29:05.594 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
01:29:05.594 Verification LBA range: start 0x0 length 0x4000
01:29:05.594 NVMe0n1 : 10.01 5948.66 23.24 0.00 0.00 21484.01 2675.81 3018433.62
01:29:05.594 [2024-12-09T05:23:48.050Z] ===================================================================================================================
01:29:05.594 [2024-12-09T05:23:48.050Z] Total : 5948.66 23.24 0.00 0.00 21484.01 2675.81 3018433.62
01:29:05.594 {
01:29:05.594   "results": [
01:29:05.594     {
01:29:05.594       "job": "NVMe0n1",
01:29:05.594       "core_mask": "0x4",
01:29:05.594       "workload": "verify",
01:29:05.594       "status": "finished",
01:29:05.594       "verify_range": {
01:29:05.594         "start": 0,
01:29:05.594         "length": 16384
01:29:05.594       },
01:29:05.594       "queue_depth": 128,
01:29:05.594       "io_size": 4096,
01:29:05.594       "runtime": 10.010148,
01:29:05.594       "iops": 5948.663296486725,
01:29:05.594       "mibps": 23.23696600190127,
01:29:05.594       "io_failed": 0,
01:29:05.594       "io_timeout": 0,
01:29:05.594       "avg_latency_us": 21484.006417652694,
01:29:05.594       "min_latency_us": 2675.814847161572,
01:29:05.594       "max_latency_us": 3018433.6209606985
01:29:05.594     }
01:29:05.594   ],
01:29:05.594   "core_count": 1
01:29:05.594 }
01:29:05.594 05:23:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82220
01:29:05.594 05:23:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
01:29:05.594 05:23:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
01:29:05.594 Running I/O for 10 seconds...
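The bdevperf summary above is internally consistent: the MiB/s column is just IOPS multiplied by the 4096-byte I/O size. A quick cross-check using the values from the JSON results block (a verification sketch, not part of the test run):

  # MiB/s = iops * io_size / 2^20, with iops and io_size taken from the JSON above.
  awk 'BEGIN { printf "%.2f MiB/s\n", 5948.663296486725 * 4096 / (1024 * 1024) }'
  # prints 23.24 MiB/s, matching the "mibps" field and the NVMe0n1 row.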
01:29:06.529 05:23:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:29:06.790 7913.00 IOPS, 30.91 MiB/s [2024-12-09T05:23:49.246Z] [2024-12-09 05:23:49.087070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.790 [2024-12-09 05:23:49.087121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.790 [2024-12-09 05:23:49.087139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.790 [2024-12-09 05:23:49.087146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.790 [2024-12-09 05:23:49.087171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.790 [2024-12-09 05:23:49.087177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.790 [2024-12-09 05:23:49.087186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.790 [2024-12-09 05:23:49.087192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.790 [2024-12-09 05:23:49.087200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.790 [2024-12-09 05:23:49.087206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.790 [2024-12-09 05:23:49.087214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.790 [2024-12-09 05:23:49.087220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.790 [2024-12-09 05:23:49.087227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.790 [2024-12-09 05:23:49.087233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.790 [2024-12-09 05:23:49.087241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70872 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 
[2024-12-09 05:23:49.087438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.791 [2024-12-09 05:23:49.087802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.791 [2024-12-09 05:23:49.087808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.087815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.087821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.087828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.087834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.087843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.087849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.087857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.087862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.087870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.087876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.087883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.087889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.087897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.087903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.087910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.087916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.087924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.087929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.087937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.087943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.087951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.087957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.087964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.087970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.087977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.087983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.087991] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.087996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088134] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71440 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.792 [2024-12-09 05:23:49.088359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.792 [2024-12-09 05:23:49.088365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.793 [2024-12-09 05:23:49.088379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.793 [2024-12-09 05:23:49.088393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.793 [2024-12-09 05:23:49.088406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.793 
[2024-12-09 05:23:49.088420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.793 [2024-12-09 05:23:49.088433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.793 [2024-12-09 05:23:49.088446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.793 [2024-12-09 05:23:49.088459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.793 [2024-12-09 05:23:49.088475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.793 [2024-12-09 05:23:49.088488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.793 [2024-12-09 05:23:49.088505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.793 [2024-12-09 05:23:49.088519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:29:06.793 [2024-12-09 05:23:49.088740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.793 [2024-12-09 05:23:49.088904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.793 [2024-12-09 05:23:49.088913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:06.794 [2024-12-09 05:23:49.088920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.794 [2024-12-09 05:23:49.088926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6bfd0 is same with the state(6) to be set 01:29:06.794 [2024-12-09 05:23:49.088934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:29:06.794 [2024-12-09 05:23:49.088939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:29:06.794 [2024-12-09 05:23:49.088947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71592 len:8 PRP1 0x0 PRP2 0x0 01:29:06.794 [2024-12-09 05:23:49.088953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:06.794 [2024-12-09 05:23:49.089193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:29:06.794 [2024-12-09 05:23:49.089264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0de50 (9): Bad file descriptor 01:29:06.794 [2024-12-09 05:23:49.089347] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:29:06.794 [2024-12-09 05:23:49.089360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0de50 with addr=10.0.0.3, port=4420 01:29:06.794 [2024-12-09 05:23:49.089367] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0de50 is same with the state(6) to be set 01:29:06.794 [2024-12-09 05:23:49.089379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0de50 (9): Bad file descriptor 01:29:06.794 [2024-12-09 05:23:49.089390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 01:29:06.794 [2024-12-09 05:23:49.089396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 01:29:06.794 [2024-12-09 05:23:49.089403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:29:06.794 [2024-12-09 05:23:49.089411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 01:29:06.794 [2024-12-09 05:23:49.089419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:29:06.794 05:23:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 01:29:07.730 4411.00 IOPS, 17.23 MiB/s [2024-12-09T05:23:50.186Z] [2024-12-09 05:23:50.087600] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:29:07.730 [2024-12-09 05:23:50.087640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0de50 with addr=10.0.0.3, port=4420 01:29:07.730 [2024-12-09 05:23:50.087650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0de50 is same with the state(6) to be set 01:29:07.730 [2024-12-09 05:23:50.087665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0de50 (9): Bad file descriptor 01:29:07.730 [2024-12-09 05:23:50.087677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 01:29:07.730 [2024-12-09 05:23:50.087683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 01:29:07.730 [2024-12-09 05:23:50.087689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:29:07.730 [2024-12-09 05:23:50.087696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
01:29:07.730 [2024-12-09 05:23:50.087705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:29:08.666 2940.67 IOPS, 11.49 MiB/s [2024-12-09T05:23:51.122Z] [2024-12-09 05:23:51.085881] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:29:08.666 [2024-12-09 05:23:51.085920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0de50 with addr=10.0.0.3, port=4420 01:29:08.667 [2024-12-09 05:23:51.085929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0de50 is same with the state(6) to be set 01:29:08.667 [2024-12-09 05:23:51.085943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0de50 (9): Bad file descriptor 01:29:08.667 [2024-12-09 05:23:51.085955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 01:29:08.667 [2024-12-09 05:23:51.085960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 01:29:08.667 [2024-12-09 05:23:51.085966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:29:08.667 [2024-12-09 05:23:51.085973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 01:29:08.667 [2024-12-09 05:23:51.085980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:29:09.866 2205.50 IOPS, 8.62 MiB/s [2024-12-09T05:23:52.322Z] [2024-12-09 05:23:52.086775] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:29:09.866 [2024-12-09 05:23:52.086818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0de50 with addr=10.0.0.3, port=4420 01:29:09.866 [2024-12-09 05:23:52.086843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0de50 is same with the state(6) to be set 01:29:09.866 [2024-12-09 05:23:52.087021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0de50 (9): Bad file descriptor 01:29:09.866 [2024-12-09 05:23:52.087197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 01:29:09.866 [2024-12-09 05:23:52.087209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 01:29:09.866 [2024-12-09 05:23:52.087215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:29:09.866 [2024-12-09 05:23:52.087222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
01:29:09.866 [2024-12-09 05:23:52.087230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:29:09.866 05:23:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:29:09.866 [2024-12-09 05:23:52.295749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:29:10.128 05:23:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82220 01:29:10.695 1764.40 IOPS, 6.89 MiB/s [2024-12-09T05:23:53.151Z] [2024-12-09 05:23:53.113363] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 01:29:12.566 2755.50 IOPS, 10.76 MiB/s [2024-12-09T05:23:55.957Z] 3678.43 IOPS, 14.37 MiB/s [2024-12-09T05:23:57.332Z] 4374.62 IOPS, 17.09 MiB/s [2024-12-09T05:23:58.269Z] 4916.11 IOPS, 19.20 MiB/s [2024-12-09T05:23:58.269Z] 5351.50 IOPS, 20.90 MiB/s 01:29:15.813 Latency(us) 01:29:15.813 [2024-12-09T05:23:58.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:15.813 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:29:15.813 Verification LBA range: start 0x0 length 0x4000 01:29:15.813 NVMe0n1 : 10.01 5358.94 20.93 5191.24 0.00 12115.91 486.51 3018433.62 01:29:15.813 [2024-12-09T05:23:58.269Z] =================================================================================================================== 01:29:15.813 [2024-12-09T05:23:58.269Z] Total : 5358.94 20.93 5191.24 0.00 12115.91 0.00 3018433.62 01:29:15.813 { 01:29:15.813 "results": [ 01:29:15.813 { 01:29:15.813 "job": "NVMe0n1", 01:29:15.813 "core_mask": "0x4", 01:29:15.813 "workload": "verify", 01:29:15.813 "status": "finished", 01:29:15.813 "verify_range": { 01:29:15.813 "start": 0, 01:29:15.813 "length": 16384 01:29:15.813 }, 01:29:15.813 "queue_depth": 128, 01:29:15.813 "io_size": 4096, 01:29:15.813 "runtime": 10.011862, 01:29:15.813 "iops": 5358.943221550597, 01:29:15.813 "mibps": 20.93337195918202, 01:29:15.813 "io_failed": 51974, 01:29:15.813 "io_timeout": 0, 01:29:15.813 "avg_latency_us": 12115.909339608692, 01:29:15.813 "min_latency_us": 486.5117903930131, 01:29:15.813 "max_latency_us": 3018433.6209606985 01:29:15.813 } 01:29:15.813 ], 01:29:15.813 "core_count": 1 01:29:15.813 } 01:29:15.813 05:23:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82092 01:29:15.813 05:23:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82092 ']' 01:29:15.813 05:23:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82092 01:29:15.813 05:23:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 01:29:15.813 05:23:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:15.813 05:23:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82092 01:29:15.813 killing process with pid 82092 01:29:15.813 05:23:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:29:15.813 05:23:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:29:15.813 05:23:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82092' 01:29:15.813 05:23:58 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@973 -- # kill 82092 01:29:15.813 Received shutdown signal, test time was about 10.000000 seconds 01:29:15.813 01:29:15.813 Latency(us) 01:29:15.813 [2024-12-09T05:23:58.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:15.813 [2024-12-09T05:23:58.269Z] =================================================================================================================== 01:29:15.813 [2024-12-09T05:23:58.269Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:29:15.813 05:23:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82092 01:29:15.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:29:15.813 05:23:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 01:29:15.813 05:23:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82334 01:29:15.813 05:23:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82334 /var/tmp/bdevperf.sock 01:29:15.813 05:23:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82334 ']' 01:29:15.813 05:23:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:29:15.813 05:23:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:15.813 05:23:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:29:15.813 05:23:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:15.813 05:23:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:29:15.813 [2024-12-09 05:23:58.255704] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:29:15.813 [2024-12-09 05:23:58.255774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82334 ] 01:29:16.073 [2024-12-09 05:23:58.383461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:16.073 [2024-12-09 05:23:58.430522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:29:16.073 [2024-12-09 05:23:58.470886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:29:17.010 05:23:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:17.010 05:23:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 01:29:17.010 05:23:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82334 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 01:29:17.010 05:23:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82346 01:29:17.010 05:23:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 01:29:17.010 05:23:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 01:29:17.270 NVMe0n1 01:29:17.270 05:23:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82387 01:29:17.270 05:23:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:29:17.270 05:23:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 01:29:17.270 Running I/O for 10 seconds... 
01:29:18.208 05:24:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:29:18.470 16094.00 IOPS, 62.87 MiB/s [2024-12-09T05:24:00.926Z] [2024-12-09 05:24:00.800963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 
01:29:18.470 [2024-12-09 05:24:00.801118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.470 [2024-12-09 05:24:00.801196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801353] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the 
state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.471 [2024-12-09 05:24:00.801710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.472 [2024-12-09 05:24:00.801715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.472 [2024-12-09 05:24:00.801720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.472 [2024-12-09 05:24:00.801725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.472 [2024-12-09 05:24:00.801729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.472 [2024-12-09 05:24:00.801744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.472 [2024-12-09 05:24:00.801750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.472 [2024-12-09 05:24:00.801755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.472 [2024-12-09 05:24:00.801760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.472 [2024-12-09 05:24:00.801765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190aac0 is same with the state(6) to be set 01:29:18.472 [2024-12-09 05:24:00.801833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.801861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:29:18.472 [2024-12-09 05:24:00.801878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.801884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.801892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.801897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.801904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.801909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.801916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.801921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.801928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.801933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.801939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:50904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.801944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.801950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.801956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.801962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.801967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.801974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.801979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.801985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.801990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.801996] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:56240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.472 [2024-12-09 05:24:00.802227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:29 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.472 [2024-12-09 05:24:00.802231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command READ / spdk_nvme_print_completion "ABORTED - SQ DELETION (00/08)" notice pair repeats for every remaining queued command, cid:30 through cid:123, each at a different LBA ...]
01:29:18.475 [2024-12-09 05:24:00.803383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:29:18.475 [2024-12-09 05:24:00.803388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.475 [2024-12-09 05:24:00.803395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cbe20 is same with the state(6) to be set 01:29:18.475 [2024-12-09 05:24:00.803402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:29:18.475 [2024-12-09 05:24:00.803407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:29:18.475 [2024-12-09 05:24:00.803413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5848 len:8 PRP1 0x0 PRP2 0x0 01:29:18.475 [2024-12-09 05:24:00.803418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:29:18.475 [2024-12-09 05:24:00.803670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:29:18.475 [2024-12-09 05:24:00.803733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125ee50 (9): Bad file descriptor 01:29:18.475 [2024-12-09 05:24:00.803809] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:29:18.475 [2024-12-09 05:24:00.803820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125ee50 with addr=10.0.0.3, port=4420 01:29:18.475 [2024-12-09 05:24:00.803826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125ee50 is same with the state(6) to be set 01:29:18.476 [2024-12-09 05:24:00.803837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125ee50 (9): Bad file descriptor 01:29:18.476 [2024-12-09 05:24:00.803846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 01:29:18.476 [2024-12-09
05:24:00.803852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 01:29:18.476 [2024-12-09 05:24:00.803859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:29:18.476 [2024-12-09 05:24:00.803866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 01:29:18.476 [2024-12-09 05:24:00.803872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:29:18.476 05:24:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82387 01:29:20.351 8816.00 IOPS, 34.44 MiB/s [2024-12-09T05:24:02.807Z] 5877.33 IOPS, 22.96 MiB/s [2024-12-09T05:24:02.807Z] [2024-12-09 05:24:02.800238] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:29:20.351 [2024-12-09 05:24:02.800291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125ee50 with addr=10.0.0.3, port=4420 01:29:20.351 [2024-12-09 05:24:02.800303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125ee50 is same with the state(6) to be set 01:29:20.351 [2024-12-09 05:24:02.800320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125ee50 (9): Bad file descriptor 01:29:20.351 [2024-12-09 05:24:02.800345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 01:29:20.351 [2024-12-09 05:24:02.800351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 01:29:20.351 [2024-12-09 05:24:02.800359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:29:20.351 [2024-12-09 05:24:02.800367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 01:29:20.351 [2024-12-09 05:24:02.800375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:29:22.663 4408.00 IOPS, 17.22 MiB/s [2024-12-09T05:24:05.119Z] 3526.40 IOPS, 13.78 MiB/s [2024-12-09T05:24:05.119Z] [2024-12-09 05:24:04.796765] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 01:29:22.663 [2024-12-09 05:24:04.796878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125ee50 with addr=10.0.0.3, port=4420 01:29:22.663 [2024-12-09 05:24:04.796919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125ee50 is same with the state(6) to be set 01:29:22.663 [2024-12-09 05:24:04.796959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125ee50 (9): Bad file descriptor 01:29:22.663 [2024-12-09 05:24:04.796995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 01:29:22.663 [2024-12-09 05:24:04.797087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 01:29:22.663 [2024-12-09 05:24:04.797129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:29:22.663 [2024-12-09 05:24:04.797155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
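The throughput samples interleaved with the reconnect errors above (8816.00 IOPS, 5877.33 IOPS, 4408.00 IOPS, 3526.40 IOPS, ...) decay because they are cumulative averages: once the target becomes unreachable no further I/O completes, so each sample is the same completed-I/O count divided by a growing elapsed time, and the MiB/s figure is simply that rate times the 4096-byte I/O size. A minimal sketch of the arithmetic; the ~17632 count is inferred from the 2-second sample, not printed by the tool:

  # Sketch only: reproduce the decaying cumulative-average readings from one
  # fixed completed-I/O count; MiB/s is IOPS * io_size / 2^20.
  ios=17632   # inferred: 8816.00 IOPS at the 2 s mark
  for t in 2 3 4 5 6 7 8; do
    awk -v ios="$ios" -v t="$t" \
      'BEGIN { printf "t=%ds  %.2f IOPS, %.2f MiB/s\n", t, ios/t, ios/t*4096/1048576 }'
  done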
01:29:22.663 [2024-12-09 05:24:04.797194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:29:24.533 2938.67 IOPS, 11.48 MiB/s [2024-12-09T05:24:06.989Z] 2518.86 IOPS, 9.84 MiB/s [2024-12-09T05:24:06.989Z] [2024-12-09 05:24:06.793411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:29:24.533 [2024-12-09 05:24:06.793500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 01:29:24.533 [2024-12-09 05:24:06.793537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 01:29:24.533 [2024-12-09 05:24:06.793566] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 01:29:24.533 [2024-12-09 05:24:06.793596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 01:29:25.470 2204.00 IOPS, 8.61 MiB/s 01:29:25.470 Latency(us) 01:29:25.470 [2024-12-09T05:24:07.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:25.470 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 01:29:25.470 NVMe0n1 : 8.10 2175.94 8.50 15.80 0.00 58447.11 1159.04 7033243.39 01:29:25.470 [2024-12-09T05:24:07.926Z] =================================================================================================================== 01:29:25.470 [2024-12-09T05:24:07.926Z] Total : 2175.94 8.50 15.80 0.00 58447.11 1159.04 7033243.39 01:29:25.470 { 01:29:25.470 "results": [ 01:29:25.470 { 01:29:25.470 "job": "NVMe0n1", 01:29:25.470 "core_mask": "0x4", 01:29:25.470 "workload": "randread", 01:29:25.470 "status": "finished", 01:29:25.470 "queue_depth": 128, 01:29:25.470 "io_size": 4096, 01:29:25.470 "runtime": 8.103153, 01:29:25.470 "iops": 2175.943117450701, 01:29:25.470 "mibps": 8.4997778025418, 01:29:25.470 "io_failed": 128, 01:29:25.470 "io_timeout": 0, 01:29:25.470 "avg_latency_us": 58447.10630630631, 01:29:25.470 "min_latency_us": 1159.0427947598253, 01:29:25.470 "max_latency_us": 7033243.388646288 01:29:25.470 } 01:29:25.470 ], 01:29:25.470 "core_count": 1 01:29:25.470 } 01:29:25.470 05:24:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:29:25.470 Attaching 5 probes... 
01:29:25.470 1212.146467: reset bdev controller NVMe0 01:29:25.470 1212.247137: reconnect bdev controller NVMe0 01:29:25.470 3208.623225: reconnect delay bdev controller NVMe0 01:29:25.470 3208.641645: reconnect bdev controller NVMe0 01:29:25.470 5205.145904: reconnect delay bdev controller NVMe0 01:29:25.470 5205.165360: reconnect bdev controller NVMe0 01:29:25.470 7201.871534: reconnect delay bdev controller NVMe0 01:29:25.470 7201.890797: reconnect bdev controller NVMe0 01:29:25.471 05:24:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 01:29:25.471 05:24:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 01:29:25.471 05:24:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82346 01:29:25.471 05:24:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:29:25.471 05:24:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82334 01:29:25.471 05:24:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82334 ']' 01:29:25.471 05:24:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82334 01:29:25.471 05:24:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 01:29:25.471 05:24:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:25.471 05:24:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82334 01:29:25.471 05:24:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:29:25.471 05:24:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:29:25.471 05:24:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82334' 01:29:25.471 killing process with pid 82334 01:29:25.471 05:24:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82334 01:29:25.471 Received shutdown signal, test time was about 8.197740 seconds 01:29:25.471 01:29:25.471 Latency(us) 01:29:25.471 [2024-12-09T05:24:07.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:29:25.471 [2024-12-09T05:24:07.927Z] =================================================================================================================== 01:29:25.471 [2024-12-09T05:24:07.927Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:29:25.471 05:24:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82334 01:29:25.730 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 01:29:25.990 05:24:08 
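The timeout check above is a simple count: trace.txt is grepped for 'reconnect delay bdev controller NVMe0' lines (three were recorded), and the test only fails if that count is 2 or fewer. The probe timestamps also show the delay itself: assuming they are in milliseconds, successive reconnect attempts land roughly 2 s apart (1212.2, 3208.6, 5205.2, 7201.9). A small sketch of the same check plus the interval computation, assuming each trace.txt line begins with '<timestamp>: <event>' as in the capture above:

  # Sketch; mirrors the grep -c check from the trace above.
  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

  # Fail when 2 or fewer delayed reconnects occurred.
  delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
  (( delays <= 2 )) && { echo "FAIL: only $delays delayed reconnects"; exit 1; }

  # Spacing between successive reconnect probes (expected to be ~2000 ms).
  grep 'reconnect bdev controller NVMe0' "$trace" |
    awk '{ sub(":", "", $1); if (prev != "") printf "interval: %.1f ms\n", $1 - prev; prev = $1 }'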
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:29:25.990 rmmod nvme_tcp 01:29:25.990 rmmod nvme_fabrics 01:29:25.990 rmmod nvme_keyring 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81898 ']' 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81898 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81898 ']' 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81898 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81898 01:29:25.990 killing process with pid 81898 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81898' 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81898 01:29:25.990 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81898 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:29:26.558 05:24:08 
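The iptr step traced above is how the harness removes only its own firewall rules: the live rule set is dumped with iptables-save, every line containing the SPDK_NVMF marker is filtered out, and the remainder is loaded back with iptables-restore, so unrelated rules survive the teardown. As a stand-alone sketch of that one pipeline:

  # Drop only the rules tagged SPDK_NVMF, keep everything else.
  iptables-save | grep -v SPDK_NVMF | iptables-restore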
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:29:26.558 05:24:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:29:26.558 05:24:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:29:26.818 05:24:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:29:26.818 05:24:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 01:29:26.818 05:24:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:29:26.818 05:24:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:29:26.818 05:24:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:29:26.818 05:24:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 01:29:26.818 ************************************ 01:29:26.818 END TEST nvmf_timeout 01:29:26.818 ************************************ 01:29:26.818 01:29:26.818 real 0m46.414s 01:29:26.818 user 2m14.591s 01:29:26.818 sys 0m5.345s 01:29:26.818 05:24:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:26.818 05:24:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:29:26.818 05:24:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 01:29:26.818 05:24:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 01:29:26.818 ************************************ 01:29:26.818 END TEST nvmf_host 01:29:26.818 ************************************ 01:29:26.818 01:29:26.818 real 5m4.118s 01:29:26.818 user 12m56.270s 01:29:26.818 sys 1m6.214s 01:29:26.818 05:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:26.818 05:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:29:26.818 05:24:09 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 01:29:26.818 05:24:09 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 01:29:26.818 ************************************ 01:29:26.818 END TEST nvmf_tcp 01:29:26.818 ************************************ 01:29:26.818 01:29:26.818 real 12m18.195s 01:29:26.818 user 29m2.044s 01:29:26.818 sys 3m2.965s 01:29:26.818 05:24:09 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:26.818 05:24:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:29:26.818 05:24:09 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 01:29:26.818 05:24:09 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 01:29:26.818 05:24:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:26.818 05:24:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:26.818 05:24:09 -- common/autotest_common.sh@10 -- # set +x 01:29:26.818 ************************************ 01:29:26.818 START TEST nvmf_dif 01:29:26.818 ************************************ 01:29:26.818 05:24:09 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 01:29:27.078 * Looking for test storage... 
01:29:27.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:29:27.078 05:24:09 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:29:27.078 05:24:09 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 01:29:27.078 05:24:09 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:29:27.078 05:24:09 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@345 -- # : 1 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@353 -- # local d=1 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@355 -- # echo 1 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@353 -- # local d=2 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@355 -- # echo 2 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@368 -- # return 0 01:29:27.078 05:24:09 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:29:27.078 05:24:09 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:29:27.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:29:27.078 --rc genhtml_branch_coverage=1 01:29:27.078 --rc genhtml_function_coverage=1 01:29:27.078 --rc genhtml_legend=1 01:29:27.078 --rc geninfo_all_blocks=1 01:29:27.078 --rc geninfo_unexecuted_blocks=1 01:29:27.078 01:29:27.078 ' 01:29:27.078 05:24:09 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:29:27.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:29:27.078 --rc genhtml_branch_coverage=1 01:29:27.078 --rc genhtml_function_coverage=1 01:29:27.078 --rc genhtml_legend=1 01:29:27.078 --rc geninfo_all_blocks=1 01:29:27.078 --rc geninfo_unexecuted_blocks=1 01:29:27.078 01:29:27.078 ' 01:29:27.078 05:24:09 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
01:29:27.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:29:27.078 --rc genhtml_branch_coverage=1 01:29:27.078 --rc genhtml_function_coverage=1 01:29:27.078 --rc genhtml_legend=1 01:29:27.078 --rc geninfo_all_blocks=1 01:29:27.078 --rc geninfo_unexecuted_blocks=1 01:29:27.078 01:29:27.078 ' 01:29:27.078 05:24:09 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:29:27.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:29:27.078 --rc genhtml_branch_coverage=1 01:29:27.078 --rc genhtml_function_coverage=1 01:29:27.078 --rc genhtml_legend=1 01:29:27.078 --rc geninfo_all_blocks=1 01:29:27.078 --rc geninfo_unexecuted_blocks=1 01:29:27.078 01:29:27.078 ' 01:29:27.078 05:24:09 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:29:27.078 05:24:09 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:29:27.078 05:24:09 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:29:27.079 05:24:09 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:29:27.079 05:24:09 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:29:27.079 05:24:09 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:29:27.079 05:24:09 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:29:27.079 05:24:09 nvmf_dif -- paths/export.sh@5 -- # export PATH 01:29:27.079 05:24:09 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@51 -- # : 0 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:29:27.079 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 01:29:27.079 05:24:09 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 01:29:27.079 05:24:09 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 01:29:27.079 05:24:09 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 01:29:27.079 05:24:09 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 01:29:27.079 05:24:09 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:29:27.079 05:24:09 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:29:27.079 05:24:09 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:29:27.079 05:24:09 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:29:27.079 05:24:09 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:29:27.339 Cannot find device "nvmf_init_br" 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@162 -- # true 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:29:27.339 Cannot find device "nvmf_init_br2" 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@163 -- # true 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:29:27.339 Cannot find device "nvmf_tgt_br" 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@164 -- # true 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:29:27.339 Cannot find device "nvmf_tgt_br2" 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@165 -- # true 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:29:27.339 Cannot find device "nvmf_init_br" 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@166 -- # true 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:29:27.339 Cannot find device "nvmf_init_br2" 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@167 -- # true 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:29:27.339 Cannot find device "nvmf_tgt_br" 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@168 -- # true 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:29:27.339 Cannot find device "nvmf_tgt_br2" 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@169 -- # true 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:29:27.339 Cannot find device "nvmf_br" 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@170 -- # true 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 01:29:27.339 Cannot find device "nvmf_init_if" 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@171 -- # true 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:29:27.339 Cannot find device "nvmf_init_if2" 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@172 -- # true 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:29:27.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@173 -- # true 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:29:27.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@174 -- # true 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:29:27.339 05:24:09 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:29:27.598 05:24:09 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:29:27.598 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:29:27.598 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 01:29:27.598 01:29:27.598 --- 10.0.0.3 ping statistics --- 01:29:27.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:29:27.598 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:29:27.598 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:29:27.598 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.093 ms 01:29:27.598 01:29:27.598 --- 10.0.0.4 ping statistics --- 01:29:27.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:29:27.598 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:29:27.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:29:27.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 01:29:27.598 01:29:27.598 --- 10.0.0.1 ping statistics --- 01:29:27.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:29:27.598 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:29:27.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:29:27.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 01:29:27.598 01:29:27.598 --- 10.0.0.2 ping statistics --- 01:29:27.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:29:27.598 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@461 -- # return 0 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 01:29:27.598 05:24:09 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:29:28.166 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:29:28.166 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:29:28.166 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:29:28.166 05:24:10 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:29:28.166 05:24:10 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:29:28.166 05:24:10 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:29:28.166 05:24:10 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:29:28.166 05:24:10 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:29:28.166 05:24:10 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:29:28.166 05:24:10 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 01:29:28.166 05:24:10 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 01:29:28.166 05:24:10 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:29:28.166 05:24:10 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 01:29:28.166 05:24:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:29:28.166 05:24:10 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82887 01:29:28.166 05:24:10 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:29:28.166 05:24:10 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82887 01:29:28.166 05:24:10 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 82887 ']' 01:29:28.166 05:24:10 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:29:28.166 05:24:10 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 01:29:28.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:29:28.166 05:24:10 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:29:28.166 05:24:10 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 01:29:28.166 05:24:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:29:28.166 [2024-12-09 05:24:10.553689] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:29:28.166 [2024-12-09 05:24:10.553752] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:29:28.426 [2024-12-09 05:24:10.706364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:29:28.426 [2024-12-09 05:24:10.752997] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
01:29:28.426 [2024-12-09 05:24:10.753039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:29:28.426 [2024-12-09 05:24:10.753046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:29:28.426 [2024-12-09 05:24:10.753050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:29:28.426 [2024-12-09 05:24:10.753055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:29:28.426 [2024-12-09 05:24:10.753395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:29:28.426 [2024-12-09 05:24:10.794877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:29:28.994 05:24:11 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:29:28.994 05:24:11 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 01:29:28.994 05:24:11 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:29:28.994 05:24:11 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 01:29:28.994 05:24:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:29:29.253 05:24:11 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:29:29.253 05:24:11 nvmf_dif -- target/dif.sh@139 -- # create_transport 01:29:29.253 05:24:11 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 01:29:29.253 05:24:11 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:29.253 05:24:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:29:29.253 [2024-12-09 05:24:11.481215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:29:29.253 05:24:11 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:29.253 05:24:11 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 01:29:29.253 05:24:11 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:29.253 05:24:11 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:29.253 05:24:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:29:29.253 ************************************ 01:29:29.253 START TEST fio_dif_1_default 01:29:29.253 ************************************ 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:29:29.253 bdev_null0 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:29:29.253 
05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:29:29.253 [2024-12-09 05:24:11.545186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:29:29.253 { 01:29:29.253 "params": { 01:29:29.253 "name": "Nvme$subsystem", 01:29:29.253 "trtype": "$TEST_TRANSPORT", 01:29:29.253 "traddr": "$NVMF_FIRST_TARGET_IP", 01:29:29.253 "adrfam": "ipv4", 01:29:29.253 "trsvcid": "$NVMF_PORT", 01:29:29.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:29:29.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:29:29.253 "hdgst": ${hdgst:-false}, 01:29:29.253 "ddgst": ${ddgst:-false} 01:29:29.253 }, 01:29:29.253 "method": "bdev_nvme_attach_controller" 01:29:29.253 } 01:29:29.253 EOF 01:29:29.253 )") 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- 
nvmf/common.sh@582 -- # cat 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:29:29.253 "params": { 01:29:29.253 "name": "Nvme0", 01:29:29.253 "trtype": "tcp", 01:29:29.253 "traddr": "10.0.0.3", 01:29:29.253 "adrfam": "ipv4", 01:29:29.253 "trsvcid": "4420", 01:29:29.253 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:29:29.253 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:29:29.253 "hdgst": false, 01:29:29.253 "ddgst": false 01:29:29.253 }, 01:29:29.253 "method": "bdev_nvme_attach_controller" 01:29:29.253 }' 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:29:29.253 05:24:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:29:29.515 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:29:29.515 fio-3.35 01:29:29.515 Starting 1 thread 01:29:41.747 01:29:41.747 filename0: (groupid=0, jobs=1): err= 0: pid=82953: Mon Dec 9 05:24:22 2024 01:29:41.747 read: IOPS=12.2k, BW=47.6MiB/s (49.9MB/s)(476MiB/10001msec) 01:29:41.747 slat (nsec): min=5490, max=65534, avg=6061.88, stdev=1363.96 01:29:41.747 clat (usec): min=273, max=1031, avg=311.82, stdev=28.61 01:29:41.747 lat (usec): min=278, max=1037, avg=317.88, stdev=29.10 01:29:41.747 clat percentiles (usec): 01:29:41.747 | 1.00th=[ 281], 5.00th=[ 
285], 10.00th=[ 289], 20.00th=[ 293], 01:29:41.747 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 310], 01:29:41.747 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 343], 95.00th=[ 371], 01:29:41.747 | 99.00th=[ 420], 99.50th=[ 445], 99.90th=[ 515], 99.95th=[ 553], 01:29:41.747 | 99.99th=[ 627] 01:29:41.747 bw ( KiB/s): min=47264, max=49696, per=100.00%, avg=48724.21, stdev=667.43, samples=19 01:29:41.747 iops : min=11816, max=12424, avg=12181.05, stdev=166.86, samples=19 01:29:41.747 lat (usec) : 500=99.88%, 750=0.12%, 1000=0.01% 01:29:41.747 lat (msec) : 2=0.01% 01:29:41.747 cpu : usr=86.62%, sys=12.17%, ctx=21, majf=0, minf=0 01:29:41.747 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:29:41.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:29:41.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:29:41.747 issued rwts: total=121760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:29:41.747 latency : target=0, window=0, percentile=100.00%, depth=4 01:29:41.747 01:29:41.747 Run status group 0 (all jobs): 01:29:41.747 READ: bw=47.6MiB/s (49.9MB/s), 47.6MiB/s-47.6MiB/s (49.9MB/s-49.9MB/s), io=476MiB (499MB), run=10001-10001msec 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:41.747 01:29:41.747 real 0m11.077s 01:29:41.747 user 0m9.371s 01:29:41.747 sys 0m1.526s 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:29:41.747 ************************************ 01:29:41.747 END TEST fio_dif_1_default 01:29:41.747 ************************************ 01:29:41.747 05:24:22 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 01:29:41.747 05:24:22 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:41.747 05:24:22 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:41.747 05:24:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:29:41.747 ************************************ 01:29:41.747 START TEST fio_dif_1_multi_subsystems 01:29:41.747 ************************************ 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 
-- # fio_dif_1_multi_subsystems 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:29:41.747 bdev_null0 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:29:41.747 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:29:41.748 [2024-12-09 05:24:22.686284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:29:41.748 bdev_null1 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:41.748 05:24:22 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:29:41.748 { 01:29:41.748 "params": { 01:29:41.748 "name": "Nvme$subsystem", 01:29:41.748 "trtype": "$TEST_TRANSPORT", 01:29:41.748 "traddr": "$NVMF_FIRST_TARGET_IP", 01:29:41.748 "adrfam": "ipv4", 01:29:41.748 "trsvcid": "$NVMF_PORT", 01:29:41.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:29:41.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:29:41.748 "hdgst": ${hdgst:-false}, 01:29:41.748 "ddgst": ${ddgst:-false} 01:29:41.748 }, 01:29:41.748 "method": "bdev_nvme_attach_controller" 01:29:41.748 } 01:29:41.748 EOF 01:29:41.748 )") 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:29:41.748 { 01:29:41.748 "params": { 01:29:41.748 "name": "Nvme$subsystem", 01:29:41.748 "trtype": "$TEST_TRANSPORT", 01:29:41.748 "traddr": "$NVMF_FIRST_TARGET_IP", 01:29:41.748 "adrfam": "ipv4", 01:29:41.748 "trsvcid": "$NVMF_PORT", 01:29:41.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:29:41.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:29:41.748 "hdgst": ${hdgst:-false}, 01:29:41.748 "ddgst": ${ddgst:-false} 01:29:41.748 }, 01:29:41.748 "method": "bdev_nvme_attach_controller" 01:29:41.748 } 01:29:41.748 EOF 01:29:41.748 )") 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
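Note on the step being traced here: the harness is about to hand fio two anonymous files, a bdev JSON config on /dev/fd/62 and a job file on /dev/fd/61, and it preloads the SPDK fio plugin so the spdk_bdev ioengine can be resolved. Below is a minimal standalone sketch of the equivalent invocation. The controller parameters (traddr 10.0.0.3, trsvcid 4420, the cnode0/host0 NQNs, hdgst/ddgst off) are the ones printed by this run; the "subsystems"/"bdev"/"config" wrapper, the temp-file paths and the job file name are assumptions, and for the multi-subsystem case a second bdev_nvme_attach_controller entry for cnode1/host1 would be appended to the same "config" array.

# Sketch only: same shape as the traced fio invocation, with the JSON config
# written to a regular file instead of being passed on /dev/fd/62.
SPDK=/home/vagrant/spdk_repo/spdk

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Preloading the plugin lets this fio binary resolve ioengine=spdk_bdev;
# /tmp/dif.fio stands in for the job file the harness feeds on /dev/fd/61.
LD_PRELOAD=$SPDK/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio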
01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:29:41.748 "params": { 01:29:41.748 "name": "Nvme0", 01:29:41.748 "trtype": "tcp", 01:29:41.748 "traddr": "10.0.0.3", 01:29:41.748 "adrfam": "ipv4", 01:29:41.748 "trsvcid": "4420", 01:29:41.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:29:41.748 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:29:41.748 "hdgst": false, 01:29:41.748 "ddgst": false 01:29:41.748 }, 01:29:41.748 "method": "bdev_nvme_attach_controller" 01:29:41.748 },{ 01:29:41.748 "params": { 01:29:41.748 "name": "Nvme1", 01:29:41.748 "trtype": "tcp", 01:29:41.748 "traddr": "10.0.0.3", 01:29:41.748 "adrfam": "ipv4", 01:29:41.748 "trsvcid": "4420", 01:29:41.748 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:29:41.748 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:29:41.748 "hdgst": false, 01:29:41.748 "ddgst": false 01:29:41.748 }, 01:29:41.748 "method": "bdev_nvme_attach_controller" 01:29:41.748 }' 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:29:41.748 05:24:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:29:41.748 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:29:41.748 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:29:41.748 fio-3.35 01:29:41.748 Starting 2 threads 01:29:51.748 01:29:51.748 filename0: (groupid=0, jobs=1): err= 0: pid=83113: Mon Dec 9 05:24:33 2024 01:29:51.748 read: IOPS=5923, BW=23.1MiB/s (24.3MB/s)(231MiB/10001msec) 01:29:51.748 slat (usec): min=5, max=392, avg=13.29, stdev= 9.16 01:29:51.748 clat (usec): min=307, max=3755, avg=634.72, stdev=135.54 01:29:51.748 lat (usec): min=316, max=3788, avg=648.00, stdev=137.16 01:29:51.748 clat percentiles (usec): 01:29:51.748 | 1.00th=[ 529], 5.00th=[ 553], 10.00th=[ 570], 20.00th=[ 586], 01:29:51.748 | 30.00th=[ 594], 40.00th=[ 603], 50.00th=[ 611], 60.00th=[ 627], 01:29:51.748 | 70.00th=[ 635], 80.00th=[ 660], 90.00th=[ 734], 95.00th=[ 766], 01:29:51.748 | 99.00th=[ 840], 99.50th=[ 1037], 99.90th=[ 2638], 99.95th=[ 2704], 01:29:51.748 | 99.99th=[ 2802] 01:29:51.748 bw ( KiB/s): min=19648, max=26144, per=50.38%, avg=23856.84, stdev=2064.55, samples=19 01:29:51.748 iops : min= 4912, max= 6536, 
avg=5964.21, stdev=516.14, samples=19 01:29:51.748 lat (usec) : 500=0.24%, 750=92.55%, 1000=6.67% 01:29:51.748 lat (msec) : 2=0.21%, 4=0.34% 01:29:51.748 cpu : usr=94.59%, sys=4.37%, ctx=44, majf=0, minf=0 01:29:51.748 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:29:51.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:29:51.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:29:51.748 issued rwts: total=59236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:29:51.748 latency : target=0, window=0, percentile=100.00%, depth=4 01:29:51.748 filename1: (groupid=0, jobs=1): err= 0: pid=83114: Mon Dec 9 05:24:33 2024 01:29:51.748 read: IOPS=5915, BW=23.1MiB/s (24.2MB/s)(231MiB/10001msec) 01:29:51.748 slat (usec): min=5, max=441, avg=11.01, stdev= 6.01 01:29:51.748 clat (usec): min=290, max=3159, avg=644.07, stdev=137.30 01:29:51.748 lat (usec): min=295, max=3175, avg=655.08, stdev=138.80 01:29:51.748 clat percentiles (usec): 01:29:51.748 | 1.00th=[ 545], 5.00th=[ 562], 10.00th=[ 570], 20.00th=[ 586], 01:29:51.748 | 30.00th=[ 594], 40.00th=[ 603], 50.00th=[ 619], 60.00th=[ 635], 01:29:51.748 | 70.00th=[ 652], 80.00th=[ 676], 90.00th=[ 742], 95.00th=[ 783], 01:29:51.748 | 99.00th=[ 865], 99.50th=[ 1090], 99.90th=[ 2638], 99.95th=[ 2704], 01:29:51.748 | 99.99th=[ 2802] 01:29:51.748 bw ( KiB/s): min=19360, max=26144, per=50.31%, avg=23823.16, stdev=2096.19, samples=19 01:29:51.748 iops : min= 4840, max= 6536, avg=5955.79, stdev=524.05, samples=19 01:29:51.748 lat (usec) : 500=0.05%, 750=90.97%, 1000=8.37% 01:29:51.748 lat (msec) : 2=0.25%, 4=0.36% 01:29:51.748 cpu : usr=92.71%, sys=6.21%, ctx=28, majf=0, minf=0 01:29:51.748 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:29:51.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:29:51.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:29:51.748 issued rwts: total=59156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:29:51.748 latency : target=0, window=0, percentile=100.00%, depth=4 01:29:51.748 01:29:51.748 Run status group 0 (all jobs): 01:29:51.748 READ: bw=46.2MiB/s (48.5MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.3MB/s), io=462MiB (485MB), run=10001-10001msec 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:51.748 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:29:51.749 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:51.749 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:29:51.749 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:51.749 01:29:51.749 real 0m11.210s 01:29:51.749 user 0m19.565s 01:29:51.749 sys 0m1.401s 01:29:51.749 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 01:29:51.749 05:24:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:29:51.749 ************************************ 01:29:51.749 END TEST fio_dif_1_multi_subsystems 01:29:51.749 ************************************ 01:29:51.749 05:24:33 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 01:29:51.749 05:24:33 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:29:51.749 05:24:33 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:29:51.749 05:24:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:29:51.749 ************************************ 01:29:51.749 START TEST fio_dif_rand_params 01:29:51.749 ************************************ 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:29:51.749 05:24:33 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:51.749 bdev_null0 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:51.749 [2024-12-09 05:24:33.962906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:29:51.749 { 01:29:51.749 "params": { 01:29:51.749 "name": "Nvme$subsystem", 01:29:51.749 "trtype": "$TEST_TRANSPORT", 01:29:51.749 "traddr": "$NVMF_FIRST_TARGET_IP", 01:29:51.749 "adrfam": "ipv4", 01:29:51.749 "trsvcid": "$NVMF_PORT", 01:29:51.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:29:51.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:29:51.749 "hdgst": ${hdgst:-false}, 01:29:51.749 "ddgst": ${ddgst:-false} 01:29:51.749 }, 01:29:51.749 "method": "bdev_nvme_attach_controller" 01:29:51.749 } 01:29:51.749 EOF 01:29:51.749 )") 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
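The rand_params case repeats the target-side provisioning used by every case in this file: a TCP transport with DIF insert/strip, a null bdev carrying 16-byte metadata, and a subsystem with one namespace and a listener on the in-namespace address 10.0.0.3:4420. For reference, the same sequence expressed directly against scripts/rpc.py (the rpc_cmd wrapper seen in the trace forwards its arguments there; that detail, and running outside the wrapper, are assumptions), using the DIF type 3 value this particular case sets:

SPDK=/home/vagrant/spdk_repo/spdk

# TCP transport with DIF insert/strip (created once per run, earlier in the trace)
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip

# Null backing bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 3
$SPDK/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Subsystem with one namespace and a TCP listener on the target-namespace IP
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420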
01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:29:51.749 05:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:29:51.749 "params": { 01:29:51.749 "name": "Nvme0", 01:29:51.749 "trtype": "tcp", 01:29:51.749 "traddr": "10.0.0.3", 01:29:51.749 "adrfam": "ipv4", 01:29:51.749 "trsvcid": "4420", 01:29:51.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:29:51.749 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:29:51.749 "hdgst": false, 01:29:51.749 "ddgst": false 01:29:51.749 }, 01:29:51.749 "method": "bdev_nvme_attach_controller" 01:29:51.749 }' 01:29:51.749 05:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:29:51.749 05:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:29:51.749 05:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:29:51.749 05:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:29:51.749 05:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:29:51.749 05:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:29:51.749 05:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:29:51.749 05:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:29:51.749 05:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:29:51.749 05:24:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:29:52.008 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:29:52.008 ... 
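fio is about to start three threads against the null-backed namespace. The job file travels over /dev/fd/61 and is never echoed into the log, so the following is only a hypothetical reconstruction, written as the /tmp/dif.fio assumed in the earlier sketch, from the parameters the harness set for this case (bs=128k, numjobs=3, iodepth=3, runtime=5) and the job header printed just above; the bdev name Nvme0n1 used as the fio filename is likewise an assumption (the namespace bdev of the attached controller Nvme0).

# Hypothetical job file, not the one generated by gen_fio_conf in the trace.
cat > /tmp/dif.fio <<'EOF'
[global]
thread=1                 ; matches "Starting 3 threads" in the log
ioengine=spdk_bdev       ; provided by the LD_PRELOADed plugin
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5

[filename0]
filename=Nvme0n1         ; assumed: namespace bdev of controller Nvme0
EOF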
01:29:52.008 fio-3.35 01:29:52.008 Starting 3 threads 01:29:58.590 01:29:58.590 filename0: (groupid=0, jobs=1): err= 0: pid=83279: Mon Dec 9 05:24:39 2024 01:29:58.590 read: IOPS=273, BW=34.1MiB/s (35.8MB/s)(171MiB/5008msec) 01:29:58.590 slat (nsec): min=5915, max=95322, avg=14763.79, stdev=6677.44 01:29:58.590 clat (usec): min=4282, max=12259, avg=10949.22, stdev=551.46 01:29:58.590 lat (usec): min=4294, max=12286, avg=10963.99, stdev=551.80 01:29:58.590 clat percentiles (usec): 01:29:58.590 | 1.00th=[10159], 5.00th=[10290], 10.00th=[10421], 20.00th=[10552], 01:29:58.590 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 01:29:58.590 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11469], 95.00th=[11731], 01:29:58.590 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 01:29:58.590 | 99.99th=[12256] 01:29:58.590 bw ( KiB/s): min=33724, max=36096, per=33.40%, avg=34937.20, stdev=983.94, samples=10 01:29:58.590 iops : min= 263, max= 282, avg=272.90, stdev= 7.75, samples=10 01:29:58.590 lat (msec) : 10=0.66%, 20=99.34% 01:29:58.590 cpu : usr=92.47%, sys=7.05%, ctx=5, majf=0, minf=0 01:29:58.590 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:29:58.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:29:58.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:29:58.590 issued rwts: total=1368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:29:58.590 latency : target=0, window=0, percentile=100.00%, depth=3 01:29:58.590 filename0: (groupid=0, jobs=1): err= 0: pid=83280: Mon Dec 9 05:24:39 2024 01:29:58.590 read: IOPS=272, BW=34.0MiB/s (35.7MB/s)(170MiB/5001msec) 01:29:58.590 slat (nsec): min=6339, max=74367, avg=17641.01, stdev=10569.04 01:29:58.590 clat (usec): min=10130, max=12307, avg=10974.46, stdev=418.67 01:29:58.590 lat (usec): min=10142, max=12336, avg=10992.10, stdev=421.22 01:29:58.590 clat percentiles (usec): 01:29:58.590 | 1.00th=[10290], 5.00th=[10290], 10.00th=[10421], 20.00th=[10552], 01:29:58.590 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 01:29:58.590 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11469], 95.00th=[11731], 01:29:58.590 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 01:29:58.590 | 99.99th=[12256] 01:29:58.590 bw ( KiB/s): min=33724, max=36096, per=33.28%, avg=34808.44, stdev=1094.34, samples=9 01:29:58.590 iops : min= 263, max= 282, avg=271.89, stdev= 8.61, samples=9 01:29:58.590 lat (msec) : 20=100.00% 01:29:58.590 cpu : usr=93.68%, sys=5.86%, ctx=11, majf=0, minf=0 01:29:58.590 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:29:58.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:29:58.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:29:58.590 issued rwts: total=1362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:29:58.590 latency : target=0, window=0, percentile=100.00%, depth=3 01:29:58.590 filename0: (groupid=0, jobs=1): err= 0: pid=83281: Mon Dec 9 05:24:39 2024 01:29:58.590 read: IOPS=272, BW=34.0MiB/s (35.7MB/s)(170MiB/5001msec) 01:29:58.590 slat (nsec): min=5961, max=74305, avg=17865.50, stdev=10259.47 01:29:58.590 clat (usec): min=10132, max=12270, avg=10971.76, stdev=418.10 01:29:58.590 lat (usec): min=10144, max=12328, avg=10989.62, stdev=420.62 01:29:58.590 clat percentiles (usec): 01:29:58.590 | 1.00th=[10290], 5.00th=[10290], 10.00th=[10421], 20.00th=[10552], 01:29:58.590 | 30.00th=[10683], 40.00th=[10814], 
50.00th=[10945], 60.00th=[11076], 01:29:58.590 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11600], 95.00th=[11731], 01:29:58.590 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 01:29:58.590 | 99.99th=[12256] 01:29:58.590 bw ( KiB/s): min=33724, max=36096, per=33.28%, avg=34808.44, stdev=1094.34, samples=9 01:29:58.590 iops : min= 263, max= 282, avg=271.89, stdev= 8.61, samples=9 01:29:58.590 lat (msec) : 20=100.00% 01:29:58.590 cpu : usr=93.26%, sys=6.16%, ctx=132, majf=0, minf=0 01:29:58.590 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:29:58.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:29:58.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:29:58.590 issued rwts: total=1362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:29:58.590 latency : target=0, window=0, percentile=100.00%, depth=3 01:29:58.590 01:29:58.590 Run status group 0 (all jobs): 01:29:58.590 READ: bw=102MiB/s (107MB/s), 34.0MiB/s-34.1MiB/s (35.7MB/s-35.8MB/s), io=512MiB (536MB), run=5001-5008msec 01:29:58.590 05:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 01:29:58.590 05:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:29:58.590 05:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:29:58.590 05:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:29:58.590 05:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:29:58.590 05:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:29:58.590 05:24:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.590 05:24:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 01:29:58.590 05:24:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:58.590 bdev_null0 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:58.590 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:58.591 [2024-12-09 05:24:40.057219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:58.591 bdev_null1 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:58.591 bdev_null2 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:29:58.591 { 01:29:58.591 "params": { 01:29:58.591 "name": "Nvme$subsystem", 01:29:58.591 "trtype": "$TEST_TRANSPORT", 01:29:58.591 "traddr": 
"$NVMF_FIRST_TARGET_IP", 01:29:58.591 "adrfam": "ipv4", 01:29:58.591 "trsvcid": "$NVMF_PORT", 01:29:58.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:29:58.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:29:58.591 "hdgst": ${hdgst:-false}, 01:29:58.591 "ddgst": ${ddgst:-false} 01:29:58.591 }, 01:29:58.591 "method": "bdev_nvme_attach_controller" 01:29:58.591 } 01:29:58.591 EOF 01:29:58.591 )") 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:29:58.591 { 01:29:58.591 "params": { 01:29:58.591 "name": "Nvme$subsystem", 01:29:58.591 "trtype": "$TEST_TRANSPORT", 01:29:58.591 "traddr": "$NVMF_FIRST_TARGET_IP", 01:29:58.591 "adrfam": "ipv4", 01:29:58.591 "trsvcid": "$NVMF_PORT", 01:29:58.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:29:58.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:29:58.591 "hdgst": ${hdgst:-false}, 01:29:58.591 "ddgst": ${ddgst:-false} 01:29:58.591 }, 01:29:58.591 "method": "bdev_nvme_attach_controller" 01:29:58.591 } 01:29:58.591 EOF 01:29:58.591 )") 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:29:58.591 05:24:40 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:29:58.591 { 01:29:58.591 "params": { 01:29:58.591 "name": "Nvme$subsystem", 01:29:58.591 "trtype": "$TEST_TRANSPORT", 01:29:58.591 "traddr": "$NVMF_FIRST_TARGET_IP", 01:29:58.591 "adrfam": "ipv4", 01:29:58.591 "trsvcid": "$NVMF_PORT", 01:29:58.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:29:58.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:29:58.591 "hdgst": ${hdgst:-false}, 01:29:58.591 "ddgst": ${ddgst:-false} 01:29:58.591 }, 01:29:58.591 "method": "bdev_nvme_attach_controller" 01:29:58.591 } 01:29:58.591 EOF 01:29:58.591 )") 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:29:58.591 "params": { 01:29:58.591 "name": "Nvme0", 01:29:58.591 "trtype": "tcp", 01:29:58.591 "traddr": "10.0.0.3", 01:29:58.591 "adrfam": "ipv4", 01:29:58.591 "trsvcid": "4420", 01:29:58.591 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:29:58.591 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:29:58.591 "hdgst": false, 01:29:58.591 "ddgst": false 01:29:58.591 }, 01:29:58.591 "method": "bdev_nvme_attach_controller" 01:29:58.591 },{ 01:29:58.591 "params": { 01:29:58.591 "name": "Nvme1", 01:29:58.591 "trtype": "tcp", 01:29:58.591 "traddr": "10.0.0.3", 01:29:58.591 "adrfam": "ipv4", 01:29:58.591 "trsvcid": "4420", 01:29:58.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:29:58.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:29:58.591 "hdgst": false, 01:29:58.591 "ddgst": false 01:29:58.591 }, 01:29:58.591 "method": "bdev_nvme_attach_controller" 01:29:58.591 },{ 01:29:58.591 "params": { 01:29:58.591 "name": "Nvme2", 01:29:58.591 "trtype": "tcp", 01:29:58.591 "traddr": "10.0.0.3", 01:29:58.591 "adrfam": "ipv4", 01:29:58.591 "trsvcid": "4420", 01:29:58.591 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:29:58.591 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:29:58.591 "hdgst": false, 01:29:58.591 "ddgst": false 01:29:58.591 }, 01:29:58.591 "method": "bdev_nvme_attach_controller" 01:29:58.591 }' 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:29:58.591 05:24:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:29:58.591 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:29:58.591 ... 01:29:58.591 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:29:58.591 ... 01:29:58.591 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:29:58.591 ... 01:29:58.591 fio-3.35 01:29:58.591 Starting 24 threads 01:30:10.788 01:30:10.788 filename0: (groupid=0, jobs=1): err= 0: pid=83381: Mon Dec 9 05:24:51 2024 01:30:10.788 read: IOPS=256, BW=1028KiB/s (1053kB/s)(10.1MiB/10020msec) 01:30:10.788 slat (usec): min=2, max=8020, avg=26.16, stdev=257.49 01:30:10.788 clat (msec): min=18, max=114, avg=62.14, stdev=16.08 01:30:10.788 lat (msec): min=18, max=114, avg=62.16, stdev=16.08 01:30:10.788 clat percentiles (msec): 01:30:10.788 | 1.00th=[ 28], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 48], 01:30:10.788 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 67], 01:30:10.788 | 70.00th=[ 70], 80.00th=[ 73], 90.00th=[ 81], 95.00th=[ 92], 01:30:10.788 | 99.00th=[ 110], 99.50th=[ 111], 99.90th=[ 115], 99.95th=[ 115], 01:30:10.788 | 99.99th=[ 115] 01:30:10.788 bw ( KiB/s): min= 872, max= 1152, per=4.15%, avg=1021.95, stdev=67.01, samples=19 01:30:10.788 iops : min= 218, max= 288, avg=255.47, stdev=16.73, samples=19 01:30:10.788 lat (msec) : 20=0.12%, 50=27.46%, 100=69.55%, 250=2.87% 01:30:10.788 cpu : usr=40.69%, sys=1.42%, ctx=1285, majf=0, minf=9 01:30:10.788 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.7%, 16=15.3%, 32=0.0%, >=64=0.0% 01:30:10.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.788 complete : 0=0.0%, 4=88.2%, 8=10.7%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.788 issued rwts: total=2575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.788 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.788 filename0: (groupid=0, jobs=1): err= 0: pid=83382: Mon Dec 9 05:24:51 2024 01:30:10.788 read: IOPS=264, BW=1060KiB/s (1085kB/s)(10.4MiB/10005msec) 01:30:10.788 slat (usec): min=2, max=7183, avg=22.45, stdev=181.39 01:30:10.788 clat (msec): min=6, max=115, avg=60.28, stdev=16.64 01:30:10.788 lat (msec): min=6, max=115, avg=60.30, stdev=16.64 01:30:10.788 clat percentiles (msec): 01:30:10.788 | 1.00th=[ 24], 5.00th=[ 38], 10.00th=[ 42], 20.00th=[ 46], 01:30:10.788 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 66], 01:30:10.788 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 80], 95.00th=[ 90], 01:30:10.788 | 99.00th=[ 105], 99.50th=[ 108], 99.90th=[ 116], 99.95th=[ 116], 01:30:10.788 | 99.99th=[ 116] 01:30:10.788 bw ( KiB/s): min= 896, max= 1192, per=4.26%, avg=1049.74, stdev=87.80, samples=19 01:30:10.788 iops : min= 224, max= 298, avg=262.42, stdev=21.94, samples=19 01:30:10.788 lat (msec) : 10=0.38%, 20=0.49%, 50=32.59%, 100=64.24%, 250=2.30% 01:30:10.788 cpu : usr=41.83%, sys=1.37%, ctx=1416, majf=0, minf=9 01:30:10.788 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=81.0%, 16=15.4%, 32=0.0%, >=64=0.0% 01:30:10.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.788 complete : 0=0.0%, 4=87.5%, 8=11.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.788 issued rwts: total=2651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
01:30:10.788 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.788 filename0: (groupid=0, jobs=1): err= 0: pid=83383: Mon Dec 9 05:24:51 2024 01:30:10.788 read: IOPS=247, BW=990KiB/s (1014kB/s)(9956KiB/10052msec) 01:30:10.788 slat (usec): min=6, max=8020, avg=23.27, stdev=277.88 01:30:10.788 clat (msec): min=7, max=129, avg=64.42, stdev=18.73 01:30:10.788 lat (msec): min=7, max=129, avg=64.44, stdev=18.73 01:30:10.788 clat percentiles (msec): 01:30:10.788 | 1.00th=[ 13], 5.00th=[ 24], 10.00th=[ 45], 20.00th=[ 48], 01:30:10.788 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 71], 01:30:10.788 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 86], 95.00th=[ 95], 01:30:10.788 | 99.00th=[ 109], 99.50th=[ 110], 99.90th=[ 127], 99.95th=[ 129], 01:30:10.788 | 99.99th=[ 130] 01:30:10.788 bw ( KiB/s): min= 800, max= 1792, per=4.01%, avg=988.90, stdev=201.12, samples=20 01:30:10.788 iops : min= 200, max= 448, avg=247.20, stdev=50.28, samples=20 01:30:10.788 lat (msec) : 10=0.56%, 20=2.57%, 50=19.00%, 100=74.53%, 250=3.33% 01:30:10.788 cpu : usr=34.61%, sys=1.03%, ctx=1030, majf=0, minf=0 01:30:10.788 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=76.9%, 16=16.0%, 32=0.0%, >=64=0.0% 01:30:10.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.788 complete : 0=0.0%, 4=89.2%, 8=9.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.788 issued rwts: total=2489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.788 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.788 filename0: (groupid=0, jobs=1): err= 0: pid=83384: Mon Dec 9 05:24:51 2024 01:30:10.788 read: IOPS=260, BW=1043KiB/s (1068kB/s)(10.2MiB/10025msec) 01:30:10.788 slat (usec): min=2, max=8024, avg=30.28, stdev=298.65 01:30:10.788 clat (msec): min=22, max=112, avg=61.17, stdev=16.07 01:30:10.788 lat (msec): min=22, max=112, avg=61.20, stdev=16.07 01:30:10.788 clat percentiles (msec): 01:30:10.788 | 1.00th=[ 31], 5.00th=[ 38], 10.00th=[ 43], 20.00th=[ 47], 01:30:10.788 | 30.00th=[ 49], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 67], 01:30:10.788 | 70.00th=[ 71], 80.00th=[ 73], 90.00th=[ 81], 95.00th=[ 88], 01:30:10.788 | 99.00th=[ 106], 99.50th=[ 110], 99.90th=[ 113], 99.95th=[ 113], 01:30:10.788 | 99.99th=[ 113] 01:30:10.788 bw ( KiB/s): min= 920, max= 1128, per=4.23%, avg=1041.75, stdev=58.87, samples=20 01:30:10.788 iops : min= 230, max= 282, avg=260.40, stdev=14.73, samples=20 01:30:10.788 lat (msec) : 50=32.24%, 100=65.20%, 250=2.56% 01:30:10.788 cpu : usr=38.22%, sys=1.23%, ctx=1151, majf=0, minf=9 01:30:10.788 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.6%, 16=15.3%, 32=0.0%, >=64=0.0% 01:30:10.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.788 complete : 0=0.0%, 4=87.6%, 8=11.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.788 issued rwts: total=2615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.788 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.788 filename0: (groupid=0, jobs=1): err= 0: pid=83385: Mon Dec 9 05:24:51 2024 01:30:10.788 read: IOPS=237, BW=950KiB/s (973kB/s)(9548KiB/10050msec) 01:30:10.788 slat (usec): min=4, max=8022, avg=20.91, stdev=231.82 01:30:10.788 clat (msec): min=10, max=133, avg=67.14, stdev=17.88 01:30:10.788 lat (msec): min=10, max=133, avg=67.16, stdev=17.89 01:30:10.788 clat percentiles (msec): 01:30:10.788 | 1.00th=[ 18], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 57], 01:30:10.788 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 01:30:10.788 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 87], 
95.00th=[ 96], 01:30:10.789 | 99.00th=[ 109], 99.50th=[ 126], 99.90th=[ 126], 99.95th=[ 132], 01:30:10.789 | 99.99th=[ 133] 01:30:10.789 bw ( KiB/s): min= 768, max= 1545, per=3.85%, avg=948.05, stdev=158.28, samples=20 01:30:10.789 iops : min= 192, max= 386, avg=237.00, stdev=39.52, samples=20 01:30:10.789 lat (msec) : 20=2.01%, 50=16.13%, 100=77.71%, 250=4.15% 01:30:10.789 cpu : usr=32.23%, sys=1.07%, ctx=898, majf=0, minf=9 01:30:10.789 IO depths : 1=0.1%, 2=2.3%, 4=9.1%, 8=73.0%, 16=15.5%, 32=0.0%, >=64=0.0% 01:30:10.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.789 complete : 0=0.0%, 4=90.3%, 8=7.8%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.789 issued rwts: total=2387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.789 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.789 filename0: (groupid=0, jobs=1): err= 0: pid=83386: Mon Dec 9 05:24:51 2024 01:30:10.789 read: IOPS=267, BW=1070KiB/s (1096kB/s)(10.5MiB/10005msec) 01:30:10.789 slat (usec): min=6, max=8025, avg=24.89, stdev=268.11 01:30:10.789 clat (msec): min=4, max=119, avg=59.71, stdev=16.84 01:30:10.789 lat (msec): min=4, max=119, avg=59.73, stdev=16.83 01:30:10.789 clat percentiles (msec): 01:30:10.789 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 01:30:10.789 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 61], 60.00th=[ 64], 01:30:10.789 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 87], 01:30:10.789 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 120], 99.95th=[ 120], 01:30:10.789 | 99.99th=[ 120] 01:30:10.789 bw ( KiB/s): min= 888, max= 1232, per=4.30%, avg=1059.53, stdev=92.12, samples=19 01:30:10.789 iops : min= 222, max= 308, avg=264.84, stdev=23.05, samples=19 01:30:10.789 lat (msec) : 10=0.67%, 20=0.22%, 50=35.69%, 100=61.17%, 250=2.24% 01:30:10.789 cpu : usr=34.56%, sys=0.86%, ctx=1078, majf=0, minf=9 01:30:10.789 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.4%, 16=15.5%, 32=0.0%, >=64=0.0% 01:30:10.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.789 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.789 issued rwts: total=2676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.789 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.789 filename0: (groupid=0, jobs=1): err= 0: pid=83387: Mon Dec 9 05:24:51 2024 01:30:10.789 read: IOPS=245, BW=981KiB/s (1005kB/s)(9848KiB/10039msec) 01:30:10.789 slat (usec): min=3, max=8015, avg=25.64, stdev=228.11 01:30:10.789 clat (msec): min=14, max=119, avg=65.07, stdev=16.16 01:30:10.789 lat (msec): min=14, max=119, avg=65.09, stdev=16.17 01:30:10.789 clat percentiles (msec): 01:30:10.789 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 49], 01:30:10.789 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 70], 01:30:10.789 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 85], 95.00th=[ 95], 01:30:10.789 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 110], 99.95th=[ 110], 01:30:10.789 | 99.99th=[ 121] 01:30:10.789 bw ( KiB/s): min= 768, max= 1280, per=3.97%, avg=978.40, stdev=98.84, samples=20 01:30:10.789 iops : min= 192, max= 320, avg=244.60, stdev=24.71, samples=20 01:30:10.789 lat (msec) : 20=0.08%, 50=21.89%, 100=75.39%, 250=2.64% 01:30:10.789 cpu : usr=37.77%, sys=1.05%, ctx=1246, majf=0, minf=9 01:30:10.789 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=77.6%, 16=15.8%, 32=0.0%, >=64=0.0% 01:30:10.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.789 complete : 0=0.0%, 4=88.8%, 8=10.0%, 16=1.1%, 32=0.0%, 
64=0.0%, >=64=0.0% 01:30:10.789 issued rwts: total=2462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.789 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.789 filename0: (groupid=0, jobs=1): err= 0: pid=83388: Mon Dec 9 05:24:51 2024 01:30:10.789 read: IOPS=256, BW=1027KiB/s (1052kB/s)(10.0MiB/10005msec) 01:30:10.789 slat (usec): min=6, max=8035, avg=22.14, stdev=193.87 01:30:10.789 clat (msec): min=4, max=117, avg=62.18, stdev=17.14 01:30:10.789 lat (msec): min=4, max=117, avg=62.20, stdev=17.14 01:30:10.789 clat percentiles (msec): 01:30:10.789 | 1.00th=[ 13], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 48], 01:30:10.789 | 30.00th=[ 50], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 70], 01:30:10.789 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 94], 01:30:10.789 | 99.00th=[ 108], 99.50th=[ 117], 99.90th=[ 118], 99.95th=[ 118], 01:30:10.789 | 99.99th=[ 118] 01:30:10.789 bw ( KiB/s): min= 856, max= 1128, per=4.11%, avg=1011.95, stdev=79.72, samples=19 01:30:10.789 iops : min= 214, max= 282, avg=252.95, stdev=19.93, samples=19 01:30:10.789 lat (msec) : 10=0.74%, 20=0.27%, 50=29.46%, 100=66.89%, 250=2.65% 01:30:10.789 cpu : usr=32.18%, sys=1.15%, ctx=906, majf=0, minf=9 01:30:10.789 IO depths : 1=0.1%, 2=1.6%, 4=6.5%, 8=76.9%, 16=14.9%, 32=0.0%, >=64=0.0% 01:30:10.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.789 complete : 0=0.0%, 4=88.7%, 8=9.9%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.789 issued rwts: total=2570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.789 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.789 filename1: (groupid=0, jobs=1): err= 0: pid=83389: Mon Dec 9 05:24:51 2024 01:30:10.789 read: IOPS=258, BW=1035KiB/s (1059kB/s)(10.1MiB/10022msec) 01:30:10.789 slat (usec): min=6, max=8041, avg=31.41, stdev=343.51 01:30:10.789 clat (msec): min=12, max=119, avg=61.70, stdev=15.81 01:30:10.789 lat (msec): min=12, max=119, avg=61.73, stdev=15.82 01:30:10.789 clat percentiles (msec): 01:30:10.789 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 48], 01:30:10.789 | 30.00th=[ 49], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 69], 01:30:10.789 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 88], 01:30:10.789 | 99.00th=[ 107], 99.50th=[ 110], 99.90th=[ 116], 99.95th=[ 116], 01:30:10.789 | 99.99th=[ 120] 01:30:10.789 bw ( KiB/s): min= 896, max= 1176, per=4.19%, avg=1032.10, stdev=74.97, samples=20 01:30:10.789 iops : min= 224, max= 294, avg=258.00, stdev=18.69, samples=20 01:30:10.789 lat (msec) : 20=0.08%, 50=31.64%, 100=65.90%, 250=2.39% 01:30:10.789 cpu : usr=32.12%, sys=1.12%, ctx=902, majf=0, minf=9 01:30:10.789 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.8%, 16=15.8%, 32=0.0%, >=64=0.0% 01:30:10.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.789 complete : 0=0.0%, 4=87.8%, 8=11.6%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.789 issued rwts: total=2592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.789 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.789 filename1: (groupid=0, jobs=1): err= 0: pid=83390: Mon Dec 9 05:24:51 2024 01:30:10.789 read: IOPS=259, BW=1037KiB/s (1062kB/s)(10.2MiB/10051msec) 01:30:10.789 slat (usec): min=6, max=3480, avg=16.64, stdev=68.55 01:30:10.789 clat (msec): min=5, max=135, avg=61.59, stdev=18.76 01:30:10.789 lat (msec): min=5, max=135, avg=61.61, stdev=18.76 01:30:10.789 clat percentiles (msec): 01:30:10.789 | 1.00th=[ 11], 5.00th=[ 30], 10.00th=[ 41], 20.00th=[ 47], 01:30:10.789 | 30.00th=[ 52], 40.00th=[ 
60], 50.00th=[ 65], 60.00th=[ 68], 01:30:10.789 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 84], 95.00th=[ 94], 01:30:10.789 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 114], 99.95th=[ 129], 01:30:10.789 | 99.99th=[ 136] 01:30:10.789 bw ( KiB/s): min= 840, max= 2012, per=4.20%, avg=1035.50, stdev=239.30, samples=20 01:30:10.789 iops : min= 210, max= 503, avg=258.85, stdev=59.83, samples=20 01:30:10.789 lat (msec) : 10=0.69%, 20=3.45%, 50=24.15%, 100=68.37%, 250=3.34% 01:30:10.789 cpu : usr=43.51%, sys=1.25%, ctx=1507, majf=0, minf=9 01:30:10.789 IO depths : 1=0.1%, 2=0.5%, 4=1.7%, 8=81.3%, 16=16.5%, 32=0.0%, >=64=0.0% 01:30:10.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.789 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.789 issued rwts: total=2605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.789 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.789 filename1: (groupid=0, jobs=1): err= 0: pid=83391: Mon Dec 9 05:24:51 2024 01:30:10.789 read: IOPS=271, BW=1087KiB/s (1113kB/s)(10.7MiB/10079msec) 01:30:10.789 slat (usec): min=6, max=3034, avg=14.91, stdev=58.21 01:30:10.789 clat (usec): min=1270, max=141790, avg=58708.97, stdev=23831.25 01:30:10.789 lat (usec): min=1277, max=141803, avg=58723.88, stdev=23832.33 01:30:10.789 clat percentiles (usec): 01:30:10.789 | 1.00th=[ 1549], 5.00th=[ 1844], 10.00th=[ 18220], 20.00th=[ 44827], 01:30:10.789 | 30.00th=[ 50594], 40.00th=[ 60031], 50.00th=[ 63177], 60.00th=[ 68682], 01:30:10.789 | 70.00th=[ 70779], 80.00th=[ 72877], 90.00th=[ 83362], 95.00th=[ 92799], 01:30:10.789 | 99.00th=[107480], 99.50th=[109577], 99.90th=[117965], 99.95th=[129500], 01:30:10.789 | 99.99th=[141558] 01:30:10.789 bw ( KiB/s): min= 856, max= 3264, per=4.42%, avg=1089.20, stdev=518.13, samples=20 01:30:10.789 iops : min= 214, max= 816, avg=272.30, stdev=129.53, samples=20 01:30:10.789 lat (msec) : 2=5.18%, 4=1.24%, 10=0.66%, 20=3.43%, 50=19.39% 01:30:10.789 lat (msec) : 100=66.89%, 250=3.21% 01:30:10.789 cpu : usr=38.25%, sys=1.18%, ctx=1060, majf=0, minf=0 01:30:10.789 IO depths : 1=0.4%, 2=1.3%, 4=3.8%, 8=78.4%, 16=16.1%, 32=0.0%, >=64=0.0% 01:30:10.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.789 complete : 0=0.0%, 4=88.7%, 8=10.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.789 issued rwts: total=2739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.789 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.789 filename1: (groupid=0, jobs=1): err= 0: pid=83392: Mon Dec 9 05:24:51 2024 01:30:10.789 read: IOPS=262, BW=1048KiB/s (1073kB/s)(10.3MiB/10059msec) 01:30:10.789 slat (usec): min=6, max=8002, avg=19.04, stdev=174.18 01:30:10.789 clat (msec): min=9, max=138, avg=60.89, stdev=18.57 01:30:10.789 lat (msec): min=9, max=138, avg=60.91, stdev=18.57 01:30:10.789 clat percentiles (msec): 01:30:10.789 | 1.00th=[ 12], 5.00th=[ 31], 10.00th=[ 40], 20.00th=[ 47], 01:30:10.789 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 67], 01:30:10.789 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 93], 01:30:10.789 | 99.00th=[ 107], 99.50th=[ 116], 99.90th=[ 117], 99.95th=[ 134], 01:30:10.789 | 99.99th=[ 140] 01:30:10.789 bw ( KiB/s): min= 896, max= 1960, per=4.25%, avg=1047.70, stdev=223.49, samples=20 01:30:10.789 iops : min= 224, max= 490, avg=261.90, stdev=55.89, samples=20 01:30:10.789 lat (msec) : 10=0.61%, 20=3.49%, 50=25.76%, 100=67.30%, 250=2.85% 01:30:10.789 cpu : usr=39.26%, sys=1.39%, ctx=1161, majf=0, minf=9 
01:30:10.789 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=81.8%, 16=16.4%, 32=0.0%, >=64=0.0% 01:30:10.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.789 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.789 issued rwts: total=2636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.789 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.789 filename1: (groupid=0, jobs=1): err= 0: pid=83393: Mon Dec 9 05:24:51 2024 01:30:10.789 read: IOPS=255, BW=1022KiB/s (1046kB/s)(10.0MiB/10033msec) 01:30:10.789 slat (usec): min=5, max=8009, avg=24.01, stdev=223.22 01:30:10.789 clat (msec): min=18, max=126, avg=62.44, stdev=17.38 01:30:10.789 lat (msec): min=18, max=126, avg=62.47, stdev=17.37 01:30:10.789 clat percentiles (msec): 01:30:10.789 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 47], 01:30:10.789 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 68], 01:30:10.789 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 84], 95.00th=[ 93], 01:30:10.789 | 99.00th=[ 111], 99.50th=[ 118], 99.90th=[ 118], 99.95th=[ 118], 01:30:10.789 | 99.99th=[ 127] 01:30:10.789 bw ( KiB/s): min= 752, max= 1296, per=4.14%, avg=1018.80, stdev=109.42, samples=20 01:30:10.790 iops : min= 188, max= 324, avg=254.70, stdev=27.36, samples=20 01:30:10.790 lat (msec) : 20=0.98%, 50=28.79%, 100=66.80%, 250=3.43% 01:30:10.790 cpu : usr=39.86%, sys=1.20%, ctx=1217, majf=0, minf=9 01:30:10.790 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.0%, 16=15.9%, 32=0.0%, >=64=0.0% 01:30:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 issued rwts: total=2563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.790 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.790 filename1: (groupid=0, jobs=1): err= 0: pid=83394: Mon Dec 9 05:24:51 2024 01:30:10.790 read: IOPS=261, BW=1044KiB/s (1069kB/s)(10.2MiB/10035msec) 01:30:10.790 slat (usec): min=6, max=8024, avg=28.91, stdev=266.02 01:30:10.790 clat (msec): min=24, max=115, avg=61.09, stdev=16.33 01:30:10.790 lat (msec): min=24, max=115, avg=61.12, stdev=16.33 01:30:10.790 clat percentiles (msec): 01:30:10.790 | 1.00th=[ 31], 5.00th=[ 38], 10.00th=[ 42], 20.00th=[ 46], 01:30:10.790 | 30.00th=[ 49], 40.00th=[ 57], 50.00th=[ 63], 60.00th=[ 67], 01:30:10.790 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 80], 95.00th=[ 90], 01:30:10.790 | 99.00th=[ 106], 99.50th=[ 110], 99.90th=[ 115], 99.95th=[ 115], 01:30:10.790 | 99.99th=[ 115] 01:30:10.790 bw ( KiB/s): min= 912, max= 1232, per=4.23%, avg=1041.50, stdev=76.86, samples=20 01:30:10.790 iops : min= 228, max= 308, avg=260.35, stdev=19.24, samples=20 01:30:10.790 lat (msec) : 50=32.10%, 100=65.34%, 250=2.56% 01:30:10.790 cpu : usr=44.27%, sys=1.37%, ctx=1383, majf=0, minf=9 01:30:10.790 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.0%, 16=15.9%, 32=0.0%, >=64=0.0% 01:30:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 issued rwts: total=2620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.790 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.790 filename1: (groupid=0, jobs=1): err= 0: pid=83395: Mon Dec 9 05:24:51 2024 01:30:10.790 read: IOPS=248, BW=996KiB/s (1020kB/s)(9.77MiB/10048msec) 01:30:10.790 slat (usec): min=6, max=7036, avg=18.86, stdev=140.64 01:30:10.790 clat (msec): min=16, max=131, avg=64.11, 
stdev=17.08 01:30:10.790 lat (msec): min=16, max=131, avg=64.13, stdev=17.08 01:30:10.790 clat percentiles (msec): 01:30:10.790 | 1.00th=[ 20], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 01:30:10.790 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 71], 01:30:10.790 | 70.00th=[ 72], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 94], 01:30:10.790 | 99.00th=[ 107], 99.50th=[ 109], 99.90th=[ 122], 99.95th=[ 129], 01:30:10.790 | 99.99th=[ 132] 01:30:10.790 bw ( KiB/s): min= 784, max= 1520, per=4.03%, avg=994.00, stdev=145.79, samples=20 01:30:10.790 iops : min= 196, max= 380, avg=248.50, stdev=36.45, samples=20 01:30:10.790 lat (msec) : 20=1.20%, 50=24.63%, 100=70.89%, 250=3.28% 01:30:10.790 cpu : usr=32.63%, sys=0.92%, ctx=907, majf=0, minf=9 01:30:10.790 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.4%, 16=16.2%, 32=0.0%, >=64=0.0% 01:30:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 complete : 0=0.0%, 4=88.5%, 8=10.7%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 issued rwts: total=2501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.790 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.790 filename1: (groupid=0, jobs=1): err= 0: pid=83396: Mon Dec 9 05:24:51 2024 01:30:10.790 read: IOPS=258, BW=1036KiB/s (1060kB/s)(10.1MiB/10027msec) 01:30:10.790 slat (usec): min=6, max=8028, avg=30.74, stdev=309.82 01:30:10.790 clat (msec): min=25, max=116, avg=61.65, stdev=15.94 01:30:10.790 lat (msec): min=25, max=116, avg=61.68, stdev=15.95 01:30:10.790 clat percentiles (msec): 01:30:10.790 | 1.00th=[ 32], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 47], 01:30:10.790 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 67], 01:30:10.790 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 90], 01:30:10.790 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 116], 99.95th=[ 116], 01:30:10.790 | 99.99th=[ 116] 01:30:10.790 bw ( KiB/s): min= 888, max= 1206, per=4.19%, avg=1031.50, stdev=85.50, samples=20 01:30:10.790 iops : min= 222, max= 301, avg=257.85, stdev=21.32, samples=20 01:30:10.790 lat (msec) : 50=29.47%, 100=67.91%, 250=2.62% 01:30:10.790 cpu : usr=41.22%, sys=1.19%, ctx=1281, majf=0, minf=9 01:30:10.790 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.7%, 16=16.2%, 32=0.0%, >=64=0.0% 01:30:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 issued rwts: total=2596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.790 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.790 filename2: (groupid=0, jobs=1): err= 0: pid=83397: Mon Dec 9 05:24:51 2024 01:30:10.790 read: IOPS=257, BW=1031KiB/s (1056kB/s)(10.1MiB/10026msec) 01:30:10.790 slat (usec): min=3, max=8024, avg=21.73, stdev=186.22 01:30:10.790 clat (msec): min=24, max=128, avg=61.93, stdev=15.58 01:30:10.790 lat (msec): min=24, max=128, avg=61.95, stdev=15.58 01:30:10.790 clat percentiles (msec): 01:30:10.790 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 48], 01:30:10.790 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 67], 01:30:10.790 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 88], 01:30:10.790 | 99.00th=[ 107], 99.50th=[ 112], 99.90th=[ 116], 99.95th=[ 116], 01:30:10.790 | 99.99th=[ 129] 01:30:10.790 bw ( KiB/s): min= 920, max= 1264, per=4.18%, avg=1029.15, stdev=76.05, samples=20 01:30:10.790 iops : min= 230, max= 316, avg=257.25, stdev=19.03, samples=20 01:30:10.790 lat (msec) : 50=29.61%, 100=68.27%, 250=2.13% 
01:30:10.790 cpu : usr=38.11%, sys=1.31%, ctx=1117, majf=0, minf=9 01:30:10.790 IO depths : 1=0.1%, 2=0.6%, 4=2.1%, 8=81.2%, 16=15.9%, 32=0.0%, >=64=0.0% 01:30:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 issued rwts: total=2584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.790 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.790 filename2: (groupid=0, jobs=1): err= 0: pid=83398: Mon Dec 9 05:24:51 2024 01:30:10.790 read: IOPS=260, BW=1042KiB/s (1067kB/s)(10.2MiB/10020msec) 01:30:10.790 slat (usec): min=6, max=8072, avg=27.85, stdev=293.65 01:30:10.790 clat (msec): min=19, max=119, avg=61.26, stdev=16.68 01:30:10.790 lat (msec): min=19, max=119, avg=61.28, stdev=16.68 01:30:10.790 clat percentiles (msec): 01:30:10.790 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 01:30:10.790 | 30.00th=[ 49], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 67], 01:30:10.790 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 94], 01:30:10.790 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 118], 99.95th=[ 118], 01:30:10.790 | 99.99th=[ 121] 01:30:10.790 bw ( KiB/s): min= 896, max= 1160, per=4.22%, avg=1039.75, stdev=70.20, samples=20 01:30:10.790 iops : min= 224, max= 290, avg=259.90, stdev=17.55, samples=20 01:30:10.790 lat (msec) : 20=0.11%, 50=32.53%, 100=64.83%, 250=2.53% 01:30:10.790 cpu : usr=35.17%, sys=0.93%, ctx=1054, majf=0, minf=9 01:30:10.790 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.1%, 16=15.8%, 32=0.0%, >=64=0.0% 01:30:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 issued rwts: total=2610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.790 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.790 filename2: (groupid=0, jobs=1): err= 0: pid=83399: Mon Dec 9 05:24:51 2024 01:30:10.790 read: IOPS=242, BW=971KiB/s (994kB/s)(9764KiB/10056msec) 01:30:10.790 slat (usec): min=4, max=5037, avg=20.74, stdev=173.90 01:30:10.790 clat (msec): min=5, max=115, avg=65.66, stdev=19.20 01:30:10.790 lat (msec): min=5, max=115, avg=65.68, stdev=19.20 01:30:10.790 clat percentiles (msec): 01:30:10.790 | 1.00th=[ 11], 5.00th=[ 27], 10.00th=[ 44], 20.00th=[ 53], 01:30:10.790 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 67], 60.00th=[ 71], 01:30:10.790 | 70.00th=[ 73], 80.00th=[ 80], 90.00th=[ 88], 95.00th=[ 96], 01:30:10.790 | 99.00th=[ 111], 99.50th=[ 115], 99.90th=[ 115], 99.95th=[ 115], 01:30:10.790 | 99.99th=[ 115] 01:30:10.790 bw ( KiB/s): min= 768, max= 1920, per=3.94%, avg=969.70, stdev=242.44, samples=20 01:30:10.790 iops : min= 192, max= 480, avg=242.40, stdev=60.62, samples=20 01:30:10.790 lat (msec) : 10=0.74%, 20=3.69%, 50=14.30%, 100=77.26%, 250=4.01% 01:30:10.790 cpu : usr=46.58%, sys=1.62%, ctx=1921, majf=0, minf=9 01:30:10.790 IO depths : 1=0.1%, 2=3.6%, 4=14.5%, 8=67.8%, 16=14.0%, 32=0.0%, >=64=0.0% 01:30:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 complete : 0=0.0%, 4=91.3%, 8=5.5%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 issued rwts: total=2441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.790 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.790 filename2: (groupid=0, jobs=1): err= 0: pid=83400: Mon Dec 9 05:24:51 2024 01:30:10.790 read: IOPS=258, BW=1032KiB/s (1057kB/s)(10.1MiB/10031msec) 01:30:10.790 slat (usec): min=6, 
max=8058, avg=18.36, stdev=158.33 01:30:10.790 clat (msec): min=18, max=117, avg=61.86, stdev=16.61 01:30:10.790 lat (msec): min=18, max=117, avg=61.87, stdev=16.60 01:30:10.790 clat percentiles (msec): 01:30:10.790 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 47], 01:30:10.790 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 68], 01:30:10.790 | 70.00th=[ 70], 80.00th=[ 73], 90.00th=[ 82], 95.00th=[ 91], 01:30:10.790 | 99.00th=[ 107], 99.50th=[ 110], 99.90th=[ 118], 99.95th=[ 118], 01:30:10.790 | 99.99th=[ 118] 01:30:10.790 bw ( KiB/s): min= 880, max= 1258, per=4.18%, avg=1030.70, stdev=98.27, samples=20 01:30:10.790 iops : min= 220, max= 314, avg=257.65, stdev=24.51, samples=20 01:30:10.790 lat (msec) : 20=0.62%, 50=29.47%, 100=67.40%, 250=2.51% 01:30:10.790 cpu : usr=40.03%, sys=1.12%, ctx=1294, majf=0, minf=9 01:30:10.790 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.9%, 16=16.1%, 32=0.0%, >=64=0.0% 01:30:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 issued rwts: total=2589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.790 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.790 filename2: (groupid=0, jobs=1): err= 0: pid=83401: Mon Dec 9 05:24:51 2024 01:30:10.790 read: IOPS=267, BW=1070KiB/s (1096kB/s)(10.5MiB/10006msec) 01:30:10.790 slat (usec): min=3, max=8045, avg=25.74, stdev=245.03 01:30:10.790 clat (msec): min=6, max=118, avg=59.69, stdev=16.76 01:30:10.790 lat (msec): min=6, max=118, avg=59.72, stdev=16.76 01:30:10.790 clat percentiles (msec): 01:30:10.790 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 47], 01:30:10.790 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 61], 60.00th=[ 64], 01:30:10.790 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 89], 01:30:10.790 | 99.00th=[ 108], 99.50th=[ 114], 99.90th=[ 118], 99.95th=[ 118], 01:30:10.790 | 99.99th=[ 118] 01:30:10.790 bw ( KiB/s): min= 888, max= 1224, per=4.31%, avg=1061.37, stdev=89.69, samples=19 01:30:10.790 iops : min= 222, max= 306, avg=265.32, stdev=22.39, samples=19 01:30:10.790 lat (msec) : 10=0.34%, 20=0.34%, 50=34.78%, 100=62.31%, 250=2.24% 01:30:10.790 cpu : usr=38.64%, sys=1.47%, ctx=1210, majf=0, minf=9 01:30:10.790 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.4%, 16=15.8%, 32=0.0%, >=64=0.0% 01:30:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.790 issued rwts: total=2677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.790 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.790 filename2: (groupid=0, jobs=1): err= 0: pid=83402: Mon Dec 9 05:24:51 2024 01:30:10.790 read: IOPS=262, BW=1050KiB/s (1075kB/s)(10.3MiB/10051msec) 01:30:10.790 slat (usec): min=6, max=4020, avg=18.71, stdev=135.27 01:30:10.790 clat (msec): min=8, max=137, avg=60.77, stdev=18.60 01:30:10.790 lat (msec): min=8, max=137, avg=60.79, stdev=18.61 01:30:10.790 clat percentiles (msec): 01:30:10.790 | 1.00th=[ 11], 5.00th=[ 32], 10.00th=[ 41], 20.00th=[ 46], 01:30:10.790 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 67], 01:30:10.791 | 70.00th=[ 70], 80.00th=[ 73], 90.00th=[ 81], 95.00th=[ 91], 01:30:10.791 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 127], 99.95th=[ 129], 01:30:10.791 | 99.99th=[ 138] 01:30:10.791 bw ( KiB/s): min= 840, max= 2012, per=4.26%, avg=1049.05, stdev=241.91, samples=20 01:30:10.791 iops : min= 210, max= 503, 
avg=262.25, stdev=60.48, samples=20 01:30:10.791 lat (msec) : 10=0.61%, 20=3.30%, 50=26.83%, 100=66.20%, 250=3.07% 01:30:10.791 cpu : usr=44.26%, sys=1.54%, ctx=1380, majf=0, minf=9 01:30:10.791 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.9%, 16=16.4%, 32=0.0%, >=64=0.0% 01:30:10.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.791 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.791 issued rwts: total=2639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.791 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.791 filename2: (groupid=0, jobs=1): err= 0: pid=83403: Mon Dec 9 05:24:51 2024 01:30:10.791 read: IOPS=259, BW=1039KiB/s (1064kB/s)(10.2MiB/10022msec) 01:30:10.791 slat (usec): min=3, max=8042, avg=37.31, stdev=407.90 01:30:10.791 clat (msec): min=23, max=119, avg=61.39, stdev=16.15 01:30:10.791 lat (msec): min=24, max=119, avg=61.43, stdev=16.14 01:30:10.791 clat percentiles (msec): 01:30:10.791 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 47], 01:30:10.791 | 30.00th=[ 48], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 66], 01:30:10.791 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 91], 01:30:10.791 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 121], 01:30:10.791 | 99.99th=[ 121] 01:30:10.791 bw ( KiB/s): min= 913, max= 1130, per=4.20%, avg=1035.45, stdev=61.15, samples=20 01:30:10.791 iops : min= 228, max= 282, avg=258.80, stdev=15.28, samples=20 01:30:10.791 lat (msec) : 50=33.68%, 100=64.09%, 250=2.23% 01:30:10.791 cpu : usr=32.39%, sys=0.83%, ctx=914, majf=0, minf=9 01:30:10.791 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.9%, 16=15.5%, 32=0.0%, >=64=0.0% 01:30:10.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.791 complete : 0=0.0%, 4=87.6%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.791 issued rwts: total=2604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.791 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.791 filename2: (groupid=0, jobs=1): err= 0: pid=83404: Mon Dec 9 05:24:51 2024 01:30:10.791 read: IOPS=259, BW=1039KiB/s (1064kB/s)(10.2MiB/10036msec) 01:30:10.791 slat (usec): min=6, max=8020, avg=19.52, stdev=157.03 01:30:10.791 clat (msec): min=24, max=117, avg=61.43, stdev=16.54 01:30:10.791 lat (msec): min=24, max=117, avg=61.45, stdev=16.54 01:30:10.791 clat percentiles (msec): 01:30:10.791 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 01:30:10.791 | 30.00th=[ 48], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 66], 01:30:10.791 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 94], 01:30:10.791 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 118], 99.95th=[ 118], 01:30:10.791 | 99.99th=[ 118] 01:30:10.791 bw ( KiB/s): min= 912, max= 1168, per=4.21%, avg=1036.45, stdev=75.09, samples=20 01:30:10.791 iops : min= 228, max= 292, avg=259.10, stdev=18.79, samples=20 01:30:10.791 lat (msec) : 50=33.37%, 100=63.60%, 250=3.03% 01:30:10.791 cpu : usr=32.17%, sys=1.06%, ctx=914, majf=0, minf=9 01:30:10.791 IO depths : 1=0.1%, 2=0.8%, 4=2.9%, 8=80.7%, 16=15.5%, 32=0.0%, >=64=0.0% 01:30:10.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.791 complete : 0=0.0%, 4=87.6%, 8=11.7%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:10.791 issued rwts: total=2607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:10.791 latency : target=0, window=0, percentile=100.00%, depth=16 01:30:10.791 01:30:10.791 Run status group 0 (all jobs): 01:30:10.791 READ: bw=24.0MiB/s (25.2MB/s), 
950KiB/s-1087KiB/s (973kB/s-1113kB/s), io=242MiB (254MB), run=10005-10079msec 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:10.791 
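As a sanity check on the group summary above: 242 MiB read over the 10.005 to 10.079 s job window is roughly 242 / 10.08, about 24.0 MiB/s, which matches the aggregate READ bandwidth reported. The per-job per= figures are each job's share of that aggregate; for example, pid 83381's avg=1021.95 KiB/s against roughly 24600 KiB/s of total bandwidth comes out near 4.15%, the value printed on its bw line.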
05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:10.791 bdev_null0 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:10.791 [2024-12-09 05:24:51.548300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:10.791 bdev_null1 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:30:10.791 05:24:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:30:10.791 { 01:30:10.791 "params": { 01:30:10.791 "name": "Nvme$subsystem", 01:30:10.791 "trtype": "$TEST_TRANSPORT", 01:30:10.791 "traddr": "$NVMF_FIRST_TARGET_IP", 01:30:10.791 "adrfam": "ipv4", 01:30:10.792 "trsvcid": "$NVMF_PORT", 01:30:10.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:30:10.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:30:10.792 "hdgst": ${hdgst:-false}, 01:30:10.792 "ddgst": ${ddgst:-false} 01:30:10.792 }, 01:30:10.792 "method": "bdev_nvme_attach_controller" 01:30:10.792 } 01:30:10.792 EOF 01:30:10.792 )") 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:30:10.792 { 01:30:10.792 "params": { 01:30:10.792 "name": "Nvme$subsystem", 01:30:10.792 "trtype": "$TEST_TRANSPORT", 01:30:10.792 "traddr": "$NVMF_FIRST_TARGET_IP", 01:30:10.792 "adrfam": "ipv4", 01:30:10.792 "trsvcid": "$NVMF_PORT", 01:30:10.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:30:10.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:30:10.792 "hdgst": ${hdgst:-false}, 01:30:10.792 "ddgst": ${ddgst:-false} 01:30:10.792 }, 01:30:10.792 "method": "bdev_nvme_attach_controller" 01:30:10.792 } 01:30:10.792 EOF 01:30:10.792 )") 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:30:10.792 "params": { 01:30:10.792 "name": "Nvme0", 01:30:10.792 "trtype": "tcp", 01:30:10.792 "traddr": "10.0.0.3", 01:30:10.792 "adrfam": "ipv4", 01:30:10.792 "trsvcid": "4420", 01:30:10.792 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:30:10.792 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:30:10.792 "hdgst": false, 01:30:10.792 "ddgst": false 01:30:10.792 }, 01:30:10.792 "method": "bdev_nvme_attach_controller" 01:30:10.792 },{ 01:30:10.792 "params": { 01:30:10.792 "name": "Nvme1", 01:30:10.792 "trtype": "tcp", 01:30:10.792 "traddr": "10.0.0.3", 01:30:10.792 "adrfam": "ipv4", 01:30:10.792 "trsvcid": "4420", 01:30:10.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:30:10.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:30:10.792 "hdgst": false, 01:30:10.792 "ddgst": false 01:30:10.792 }, 01:30:10.792 "method": "bdev_nvme_attach_controller" 01:30:10.792 }' 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:30:10.792 05:24:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:30:10.792 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:30:10.792 ... 01:30:10.792 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:30:10.792 ... 
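Note: gen_nvmf_target_json and gen_fio_conf above are handed to fio through /dev/fd/62 and /dev/fd/61; the same run can be reproduced with ordinary files. A sketch under that assumption, with the bdev_nvme_attach_controller params copied from the config printed above (the outer subsystems/bdev wrapper follows SPDK's standard JSON config layout, which the helper assembles with jq; bdev.json and dif.fio are illustrative file names, and the real run attached a second controller, Nvme1/cnode1, the same way):

# bdev.json: attach the target's namespace over NVMe/TCP when fio starts.
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Run fio with the SPDK bdev ioengine plugin preloaded, as the harness does.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio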
01:30:10.792 fio-3.35 01:30:10.792 Starting 4 threads 01:30:14.979 01:30:14.979 filename0: (groupid=0, jobs=1): err= 0: pid=83544: Mon Dec 9 05:24:57 2024 01:30:14.979 read: IOPS=2229, BW=17.4MiB/s (18.3MB/s)(87.1MiB/5002msec) 01:30:14.979 slat (nsec): min=5718, max=65176, avg=8307.25, stdev=2746.94 01:30:14.979 clat (usec): min=1269, max=6646, avg=3564.71, stdev=713.28 01:30:14.979 lat (usec): min=1289, max=6658, avg=3573.02, stdev=712.57 01:30:14.979 clat percentiles (usec): 01:30:14.979 | 1.00th=[ 2835], 5.00th=[ 2966], 10.00th=[ 3032], 20.00th=[ 3064], 01:30:14.979 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3294], 01:30:14.979 | 70.00th=[ 3490], 80.00th=[ 4555], 90.00th=[ 4817], 95.00th=[ 4948], 01:30:14.979 | 99.00th=[ 5145], 99.50th=[ 5276], 99.90th=[ 5473], 99.95th=[ 5604], 01:30:14.979 | 99.99th=[ 6259] 01:30:14.979 bw ( KiB/s): min=17408, max=18176, per=25.06%, avg=17872.00, stdev=243.31, samples=9 01:30:14.979 iops : min= 2176, max= 2272, avg=2234.00, stdev=30.41, samples=9 01:30:14.979 lat (msec) : 2=0.46%, 4=74.62%, 10=24.92% 01:30:14.979 cpu : usr=93.10%, sys=6.20%, ctx=8, majf=0, minf=0 01:30:14.979 IO depths : 1=0.1%, 2=0.4%, 4=71.4%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 01:30:14.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:14.979 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:14.979 issued rwts: total=11152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:14.979 latency : target=0, window=0, percentile=100.00%, depth=8 01:30:14.979 filename0: (groupid=0, jobs=1): err= 0: pid=83545: Mon Dec 9 05:24:57 2024 01:30:14.979 read: IOPS=2228, BW=17.4MiB/s (18.3MB/s)(87.1MiB/5001msec) 01:30:14.979 slat (nsec): min=6400, max=48229, avg=12590.59, stdev=2965.44 01:30:14.979 clat (usec): min=847, max=6670, avg=3558.55, stdev=706.60 01:30:14.979 lat (usec): min=860, max=6683, avg=3571.14, stdev=707.06 01:30:14.979 clat percentiles (usec): 01:30:14.979 | 1.00th=[ 2868], 5.00th=[ 2966], 10.00th=[ 2999], 20.00th=[ 3064], 01:30:14.979 | 30.00th=[ 3097], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3294], 01:30:14.979 | 70.00th=[ 3490], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 4883], 01:30:14.979 | 99.00th=[ 5145], 99.50th=[ 5276], 99.90th=[ 5669], 99.95th=[ 5800], 01:30:14.979 | 99.99th=[ 6259] 01:30:14.979 bw ( KiB/s): min=17408, max=18160, per=25.01%, avg=17834.67, stdev=219.24, samples=9 01:30:14.979 iops : min= 2176, max= 2270, avg=2229.33, stdev=27.40, samples=9 01:30:14.979 lat (usec) : 1000=0.01% 01:30:14.979 lat (msec) : 2=0.31%, 4=74.82%, 10=24.86% 01:30:14.979 cpu : usr=93.74%, sys=5.56%, ctx=49, majf=0, minf=0 01:30:14.979 IO depths : 1=0.1%, 2=0.3%, 4=71.5%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 01:30:14.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:14.979 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:14.979 issued rwts: total=11147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:14.979 latency : target=0, window=0, percentile=100.00%, depth=8 01:30:14.979 filename1: (groupid=0, jobs=1): err= 0: pid=83546: Mon Dec 9 05:24:57 2024 01:30:14.979 read: IOPS=2229, BW=17.4MiB/s (18.3MB/s)(87.1MiB/5003msec) 01:30:14.979 slat (nsec): min=5688, max=64470, avg=9141.08, stdev=3199.19 01:30:14.979 clat (usec): min=1360, max=6654, avg=3564.45, stdev=704.79 01:30:14.979 lat (usec): min=1367, max=6667, avg=3573.59, stdev=704.80 01:30:14.979 clat percentiles (usec): 01:30:14.979 | 1.00th=[ 2835], 5.00th=[ 2966], 10.00th=[ 3032], 20.00th=[ 3064], 
01:30:14.979 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3294], 01:30:14.979 | 70.00th=[ 3490], 80.00th=[ 4555], 90.00th=[ 4817], 95.00th=[ 4948], 01:30:14.979 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 5473], 99.95th=[ 5604], 01:30:14.979 | 99.99th=[ 6259] 01:30:14.979 bw ( KiB/s): min=17408, max=18112, per=25.04%, avg=17857.78, stdev=219.01, samples=9 01:30:14.979 iops : min= 2176, max= 2264, avg=2232.22, stdev=27.38, samples=9 01:30:14.979 lat (msec) : 2=0.21%, 4=75.02%, 10=24.78% 01:30:14.979 cpu : usr=94.12%, sys=5.20%, ctx=5, majf=0, minf=0 01:30:14.979 IO depths : 1=0.1%, 2=0.4%, 4=71.5%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 01:30:14.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:14.979 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:14.979 issued rwts: total=11152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:14.979 latency : target=0, window=0, percentile=100.00%, depth=8 01:30:14.979 filename1: (groupid=0, jobs=1): err= 0: pid=83547: Mon Dec 9 05:24:57 2024 01:30:14.979 read: IOPS=2229, BW=17.4MiB/s (18.3MB/s)(87.1MiB/5001msec) 01:30:14.979 slat (nsec): min=5897, max=51548, avg=12376.17, stdev=3041.32 01:30:14.979 clat (usec): min=681, max=6680, avg=3556.47, stdev=707.72 01:30:14.979 lat (usec): min=688, max=6692, avg=3568.84, stdev=706.86 01:30:14.979 clat percentiles (usec): 01:30:14.979 | 1.00th=[ 2835], 5.00th=[ 2966], 10.00th=[ 2999], 20.00th=[ 3064], 01:30:14.979 | 30.00th=[ 3097], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3294], 01:30:14.979 | 70.00th=[ 3490], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 4883], 01:30:14.979 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 5473], 99.95th=[ 5538], 01:30:14.979 | 99.99th=[ 6194] 01:30:14.979 bw ( KiB/s): min=17408, max=18160, per=25.01%, avg=17838.56, stdev=218.69, samples=9 01:30:14.979 iops : min= 2176, max= 2270, avg=2229.78, stdev=27.34, samples=9 01:30:14.979 lat (usec) : 750=0.03%, 1000=0.01% 01:30:14.979 lat (msec) : 2=0.31%, 4=74.80%, 10=24.85% 01:30:14.979 cpu : usr=93.76%, sys=5.54%, ctx=10, majf=0, minf=1 01:30:14.979 IO depths : 1=0.1%, 2=0.3%, 4=71.5%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 01:30:14.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:14.979 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:14.980 issued rwts: total=11150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:14.980 latency : target=0, window=0, percentile=100.00%, depth=8 01:30:14.980 01:30:14.980 Run status group 0 (all jobs): 01:30:14.980 READ: bw=69.6MiB/s (73.0MB/s), 17.4MiB/s-17.4MiB/s (18.3MB/s-18.3MB/s), io=348MiB (365MB), run=5001-5003msec 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:30:15.237 05:24:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:15.238 05:24:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:15.496 05:24:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:15.496 01:30:15.496 real 0m23.778s 01:30:15.496 user 2m6.724s 01:30:15.496 sys 0m5.893s 01:30:15.496 05:24:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 01:30:15.496 05:24:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:30:15.496 ************************************ 01:30:15.496 END TEST fio_dif_rand_params 01:30:15.496 ************************************ 01:30:15.496 05:24:57 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 01:30:15.496 05:24:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:30:15.496 05:24:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:30:15.496 05:24:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:30:15.496 ************************************ 01:30:15.496 START TEST fio_dif_digest 01:30:15.496 ************************************ 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:30:15.496 bdev_null0 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:30:15.496 [2024-12-09 05:24:57.812847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:30:15.496 { 01:30:15.496 "params": { 01:30:15.496 "name": "Nvme$subsystem", 01:30:15.496 "trtype": "$TEST_TRANSPORT", 01:30:15.496 "traddr": "$NVMF_FIRST_TARGET_IP", 01:30:15.496 "adrfam": "ipv4", 01:30:15.496 "trsvcid": "$NVMF_PORT", 01:30:15.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:30:15.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:30:15.496 "hdgst": ${hdgst:-false}, 01:30:15.496 "ddgst": ${ddgst:-false} 01:30:15.496 }, 01:30:15.496 "method": 
"bdev_nvme_attach_controller" 01:30:15.496 } 01:30:15.496 EOF 01:30:15.496 )") 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:30:15.496 "params": { 01:30:15.496 "name": "Nvme0", 01:30:15.496 "trtype": "tcp", 01:30:15.496 "traddr": "10.0.0.3", 01:30:15.496 "adrfam": "ipv4", 01:30:15.496 "trsvcid": "4420", 01:30:15.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:30:15.496 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:30:15.496 "hdgst": true, 01:30:15.496 "ddgst": true 01:30:15.496 }, 01:30:15.496 "method": "bdev_nvme_attach_controller" 01:30:15.496 }' 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 01:30:15.496 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:30:15.497 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:30:15.497 05:24:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:30:15.755 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:30:15.755 ... 
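Note: for the fio_dif_digest test the attach config printed above enables both digests ("hdgst": true, "ddgst": true), so every NVMe/TCP PDU carries header and data digests while fio reads. The job file itself comes from gen_fio_conf; a rough equivalent built from the parameters dif.sh set above (randread, bs=128k, iodepth=3, numjobs=3, runtime=10) is sketched below. The bdev name Nvme0n1 is an assumption about how the attached namespace is exposed, not something shown in the log:

# dif-digest.fio: approximate fio job matching the logged parameters.
cat > dif-digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF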
01:30:15.755 fio-3.35 01:30:15.755 Starting 3 threads 01:30:27.956 01:30:27.956 filename0: (groupid=0, jobs=1): err= 0: pid=83653: Mon Dec 9 05:25:08 2024 01:30:27.956 read: IOPS=243, BW=30.4MiB/s (31.9MB/s)(304MiB/10007msec) 01:30:27.956 slat (nsec): min=6008, max=40738, avg=13781.68, stdev=4660.80 01:30:27.956 clat (usec): min=4753, max=14623, avg=12308.07, stdev=816.33 01:30:27.956 lat (usec): min=4780, max=14648, avg=12321.85, stdev=816.22 01:30:27.956 clat percentiles (usec): 01:30:27.956 | 1.00th=[10552], 5.00th=[10683], 10.00th=[11076], 20.00th=[11863], 01:30:27.956 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 01:30:27.956 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13042], 95.00th=[13304], 01:30:27.956 | 99.00th=[13566], 99.50th=[13829], 99.90th=[14615], 99.95th=[14615], 01:30:27.956 | 99.99th=[14615] 01:30:27.956 bw ( KiB/s): min=29184, max=36096, per=33.19%, avg=30959.26, stdev=1537.73, samples=19 01:30:27.957 iops : min= 228, max= 282, avg=241.84, stdev=12.02, samples=19 01:30:27.957 lat (msec) : 10=0.37%, 20=99.63% 01:30:27.957 cpu : usr=92.74%, sys=6.84%, ctx=104, majf=0, minf=0 01:30:27.957 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:30:27.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:27.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:27.957 issued rwts: total=2433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:27.957 latency : target=0, window=0, percentile=100.00%, depth=3 01:30:27.957 filename0: (groupid=0, jobs=1): err= 0: pid=83654: Mon Dec 9 05:25:08 2024 01:30:27.957 read: IOPS=242, BW=30.4MiB/s (31.8MB/s)(304MiB/10006msec) 01:30:27.957 slat (nsec): min=6247, max=41693, avg=14324.15, stdev=3919.04 01:30:27.957 clat (usec): min=8522, max=16048, avg=12321.18, stdev=837.02 01:30:27.957 lat (usec): min=8535, max=16062, avg=12335.51, stdev=837.25 01:30:27.957 clat percentiles (usec): 01:30:27.957 | 1.00th=[10028], 5.00th=[10552], 10.00th=[10945], 20.00th=[11863], 01:30:27.957 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 01:30:27.957 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13042], 95.00th=[13304], 01:30:27.957 | 99.00th=[14091], 99.50th=[15270], 99.90th=[16057], 99.95th=[16057], 01:30:27.957 | 99.99th=[16057] 01:30:27.957 bw ( KiB/s): min=29184, max=35328, per=33.10%, avg=30881.68, stdev=1415.63, samples=19 01:30:27.957 iops : min= 228, max= 276, avg=241.26, stdev=11.06, samples=19 01:30:27.957 lat (msec) : 10=0.86%, 20=99.14% 01:30:27.957 cpu : usr=93.55%, sys=5.97%, ctx=191, majf=0, minf=0 01:30:27.957 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:30:27.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:27.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:27.957 issued rwts: total=2430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:27.957 latency : target=0, window=0, percentile=100.00%, depth=3 01:30:27.957 filename0: (groupid=0, jobs=1): err= 0: pid=83655: Mon Dec 9 05:25:08 2024 01:30:27.957 read: IOPS=242, BW=30.4MiB/s (31.8MB/s)(304MiB/10006msec) 01:30:27.957 slat (nsec): min=6066, max=38058, avg=14184.92, stdev=3957.24 01:30:27.957 clat (usec): min=8523, max=14028, avg=12320.99, stdev=732.30 01:30:27.957 lat (usec): min=8535, max=14041, avg=12335.18, stdev=732.47 01:30:27.957 clat percentiles (usec): 01:30:27.957 | 1.00th=[10552], 5.00th=[10683], 10.00th=[11076], 20.00th=[11863], 01:30:27.957 | 30.00th=[12125], 
40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 01:30:27.957 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13042], 95.00th=[13304], 01:30:27.957 | 99.00th=[13566], 99.50th=[13698], 99.90th=[13960], 99.95th=[13960], 01:30:27.957 | 99.99th=[14091] 01:30:27.957 bw ( KiB/s): min=29184, max=35328, per=33.10%, avg=30881.68, stdev=1415.63, samples=19 01:30:27.957 iops : min= 228, max= 276, avg=241.26, stdev=11.06, samples=19 01:30:27.957 lat (msec) : 10=0.12%, 20=99.88% 01:30:27.957 cpu : usr=92.73%, sys=6.86%, ctx=18, majf=0, minf=0 01:30:27.957 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:30:27.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:27.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:30:27.957 issued rwts: total=2430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:30:27.957 latency : target=0, window=0, percentile=100.00%, depth=3 01:30:27.957 01:30:27.957 Run status group 0 (all jobs): 01:30:27.957 READ: bw=91.1MiB/s (95.5MB/s), 30.4MiB/s-30.4MiB/s (31.8MB/s-31.9MB/s), io=912MiB (956MB), run=10006-10007msec 01:30:27.957 05:25:08 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 01:30:27.957 05:25:08 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 01:30:27.957 05:25:08 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 01:30:27.957 05:25:08 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 01:30:27.957 05:25:08 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 01:30:27.957 05:25:08 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:30:27.957 05:25:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:27.957 05:25:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:30:27.957 05:25:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:27.957 05:25:08 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:30:27.957 05:25:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:27.957 05:25:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:30:27.957 05:25:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:27.957 01:30:27.957 real 0m11.083s 01:30:27.957 user 0m28.670s 01:30:27.957 sys 0m2.265s 01:30:27.957 05:25:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 01:30:27.957 05:25:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:30:27.957 ************************************ 01:30:27.957 END TEST fio_dif_digest 01:30:27.957 ************************************ 01:30:27.957 05:25:08 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 01:30:27.957 05:25:08 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 01:30:27.957 05:25:08 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 01:30:27.957 05:25:08 nvmf_dif -- nvmf/common.sh@121 -- # sync 01:30:27.957 05:25:09 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:30:27.957 05:25:09 nvmf_dif -- nvmf/common.sh@124 -- # set +e 01:30:27.957 05:25:09 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 01:30:27.957 05:25:09 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:30:27.957 rmmod nvme_tcp 01:30:27.957 rmmod nvme_fabrics 01:30:27.957 rmmod nvme_keyring 01:30:27.957 05:25:09 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
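Note: nvmftestfini/nvmfcleanup above unloads the initiator-side kernel modules under set +e with a bounded retry ({1..20}), because nvme-tcp can stay referenced for a moment after the last controller detaches. A hedged sketch of that pattern (the exact loop body in nvmf/common.sh may differ; the sleep is an assumption):

# Unload nvme-tcp and nvme-fabrics, retrying while module references drain.
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1   # assumed back-off; the harness may retry immediately
done
set -e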
01:30:27.957 05:25:09 nvmf_dif -- nvmf/common.sh@128 -- # set -e 01:30:27.957 05:25:09 nvmf_dif -- nvmf/common.sh@129 -- # return 0 01:30:27.957 05:25:09 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82887 ']' 01:30:27.957 05:25:09 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82887 01:30:27.957 05:25:09 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 82887 ']' 01:30:27.957 05:25:09 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 82887 01:30:27.957 05:25:09 nvmf_dif -- common/autotest_common.sh@959 -- # uname 01:30:27.957 05:25:09 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:30:27.957 05:25:09 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82887 01:30:27.957 05:25:09 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:30:27.957 05:25:09 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:30:27.957 05:25:09 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82887' 01:30:27.957 killing process with pid 82887 01:30:27.957 05:25:09 nvmf_dif -- common/autotest_common.sh@973 -- # kill 82887 01:30:27.957 05:25:09 nvmf_dif -- common/autotest_common.sh@978 -- # wait 82887 01:30:27.957 05:25:09 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 01:30:27.957 05:25:09 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:30:27.957 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:30:27.957 Waiting for block devices as requested 01:30:27.957 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:30:27.957 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@297 -- # iptr 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:30:27.957 05:25:10 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:30:28.216 05:25:10 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
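Note: after the target process is killed, nvmf_tcp_fini/nvmf_veth_fini above removes only the firewall rules that were tagged with a SPDK_NVMF comment at setup time and then dismantles the veth/bridge topology. A condensed, idempotent sketch of the same teardown (error suppression added here; _remove_spdk_ns is assumed to amount to deleting the namespace):

# Drop every iptables rule carrying the SPDK_NVMF comment tag.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Detach the bridge-side interfaces, then delete the bridge and veth pairs.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster 2>/dev/null || true
    ip link set "$dev" down 2>/dev/null || true
done
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if 2>/dev/null || true
ip link delete nvmf_init_if2 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true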
01:30:28.216 05:25:10 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 01:30:28.216 05:25:10 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:30:28.216 05:25:10 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:30:28.216 05:25:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:30:28.216 05:25:10 nvmf_dif -- nvmf/common.sh@300 -- # return 0 01:30:28.216 01:30:28.216 real 1m1.264s 01:30:28.216 user 3m53.794s 01:30:28.216 sys 0m16.807s 01:30:28.216 05:25:10 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 01:30:28.216 05:25:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:30:28.216 ************************************ 01:30:28.216 END TEST nvmf_dif 01:30:28.216 ************************************ 01:30:28.216 05:25:10 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 01:30:28.216 05:25:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:30:28.216 05:25:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:30:28.216 05:25:10 -- common/autotest_common.sh@10 -- # set +x 01:30:28.216 ************************************ 01:30:28.216 START TEST nvmf_abort_qd_sizes 01:30:28.216 ************************************ 01:30:28.216 05:25:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 01:30:28.216 * Looking for test storage... 01:30:28.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:30:28.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:28.476 --rc genhtml_branch_coverage=1 01:30:28.476 --rc genhtml_function_coverage=1 01:30:28.476 --rc genhtml_legend=1 01:30:28.476 --rc geninfo_all_blocks=1 01:30:28.476 --rc geninfo_unexecuted_blocks=1 01:30:28.476 01:30:28.476 ' 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:30:28.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:28.476 --rc genhtml_branch_coverage=1 01:30:28.476 --rc genhtml_function_coverage=1 01:30:28.476 --rc genhtml_legend=1 01:30:28.476 --rc geninfo_all_blocks=1 01:30:28.476 --rc geninfo_unexecuted_blocks=1 01:30:28.476 01:30:28.476 ' 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:30:28.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:28.476 --rc genhtml_branch_coverage=1 01:30:28.476 --rc genhtml_function_coverage=1 01:30:28.476 --rc genhtml_legend=1 01:30:28.476 --rc geninfo_all_blocks=1 01:30:28.476 --rc geninfo_unexecuted_blocks=1 01:30:28.476 01:30:28.476 ' 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:30:28.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:30:28.476 --rc genhtml_branch_coverage=1 01:30:28.476 --rc genhtml_function_coverage=1 01:30:28.476 --rc genhtml_legend=1 01:30:28.476 --rc geninfo_all_blocks=1 01:30:28.476 --rc geninfo_unexecuted_blocks=1 01:30:28.476 01:30:28.476 ' 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 01:30:28.476 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:30:28.477 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:30:28.477 Cannot find device "nvmf_init_br" 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:30:28.477 Cannot find device "nvmf_init_br2" 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:30:28.477 Cannot find device "nvmf_tgt_br" 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:30:28.477 Cannot find device "nvmf_tgt_br2" 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:30:28.477 Cannot find device "nvmf_init_br" 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:30:28.477 Cannot find device "nvmf_init_br2" 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 01:30:28.477 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:30:28.736 Cannot find device "nvmf_tgt_br" 01:30:28.736 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 01:30:28.736 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:30:28.736 Cannot find device "nvmf_tgt_br2" 01:30:28.736 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 01:30:28.736 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:30:28.736 Cannot find device "nvmf_br" 01:30:28.736 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 01:30:28.736 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:30:28.736 Cannot find device "nvmf_init_if" 01:30:28.736 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 01:30:28.736 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:30:28.736 Cannot find device "nvmf_init_if2" 01:30:28.736 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 01:30:28.736 05:25:10 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:30:28.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:30:28.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:30:28.736 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:30:28.737 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:30:28.737 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 01:30:28.737 01:30:28.737 --- 10.0.0.3 ping statistics --- 01:30:28.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:30:28.737 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:30:28.737 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:30:28.737 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 01:30:28.737 01:30:28.737 --- 10.0.0.4 ping statistics --- 01:30:28.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:30:28.737 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:30:28.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:30:28.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 01:30:28.737 01:30:28.737 --- 10.0.0.1 ping statistics --- 01:30:28.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:30:28.737 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:30:28.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:30:28.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 01:30:28.737 01:30:28.737 --- 10.0.0.2 ping statistics --- 01:30:28.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:30:28.737 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 01:30:28.737 05:25:11 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:30:29.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:30:29.744 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:30:29.744 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84317 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84317 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84317 ']' 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 01:30:29.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 01:30:29.744 05:25:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:30:29.744 [2024-12-09 05:25:12.196852] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:30:29.744 [2024-12-09 05:25:12.196915] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:30:30.002 [2024-12-09 05:25:12.349472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:30:30.002 [2024-12-09 05:25:12.422640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:30:30.002 [2024-12-09 05:25:12.422688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:30:30.002 [2024-12-09 05:25:12.422694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:30:30.002 [2024-12-09 05:25:12.422699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:30:30.002 [2024-12-09 05:25:12.422703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:30:30.002 [2024-12-09 05:25:12.424062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:30:30.002 [2024-12-09 05:25:12.424211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:30:30.002 [2024-12-09 05:25:12.424364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:30:30.002 [2024-12-09 05:25:12.424347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:30:30.261 [2024-12-09 05:25:12.500982] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 01:30:30.828 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 01:30:30.828 05:25:13 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
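The nvme_in_userspace helper traced above builds its controller list by walking PCI devices with class 0x01 (mass storage), subclass 0x08 (non-volatile memory), programming interface 0x02 (NVMe). A rough condensation of that lspci filter, reusing the exact pipeline from the xtrace (note the awk variable deliberately keeps the surrounding quotes so it matches lspci's quoted class field):

# list PCI addresses of NVMe controllers (class 01, subclass 08, prog-if 02)
class=$(printf '%02x' 1)        # -> 01
subclass=$(printf '%02x' 8)     # -> 08
progif=$(printf '%02x' 2)       # -> 02
lspci -mm -n -D |
    grep -i -- "-p${progif}" |
    awk -v cc="\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' |
    tr -d '"'
# in this run the filter yields 0000:00:10.0 and 0000:00:11.0; spdk_target_abort
# then attaches 0000:00:10.0 through "bdev_nvme_attach_controller -t pcie" below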
01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 01:30:30.829 05:25:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:30:30.829 ************************************ 01:30:30.829 START TEST spdk_target_abort 01:30:30.829 ************************************ 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:30:30.829 spdk_targetn1 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:30:30.829 [2024-12-09 05:25:13.245118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:30.829 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:30:31.088 [2024-12-09 05:25:13.294885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:30:31.088 05:25:13 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:30:31.088 05:25:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:30:34.375 Initializing NVMe Controllers 01:30:34.375 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 01:30:34.375 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:30:34.375 Initialization complete. Launching workers. 
01:30:34.375 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14582, failed: 0 01:30:34.375 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1064, failed to submit 13518 01:30:34.375 success 847, unsuccessful 217, failed 0 01:30:34.375 05:25:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:30:34.375 05:25:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:30:37.662 Initializing NVMe Controllers 01:30:37.662 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 01:30:37.662 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:30:37.662 Initialization complete. Launching workers. 01:30:37.662 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9010, failed: 0 01:30:37.662 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1162, failed to submit 7848 01:30:37.662 success 391, unsuccessful 771, failed 0 01:30:37.920 05:25:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:30:37.920 05:25:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:30:41.212 Initializing NVMe Controllers 01:30:41.212 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 01:30:41.212 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:30:41.212 Initialization complete. Launching workers. 
01:30:41.212 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32451, failed: 0 01:30:41.212 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2351, failed to submit 30100 01:30:41.212 success 462, unsuccessful 1889, failed 0 01:30:41.212 05:25:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 01:30:41.212 05:25:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:41.212 05:25:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:30:41.213 05:25:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:41.213 05:25:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 01:30:41.213 05:25:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:30:41.213 05:25:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:30:42.599 05:25:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:30:42.599 05:25:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84317 01:30:42.599 05:25:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84317 ']' 01:30:42.599 05:25:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84317 01:30:42.599 05:25:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 01:30:42.599 05:25:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:30:42.599 05:25:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84317 01:30:42.599 05:25:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:30:42.599 05:25:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:30:42.599 killing process with pid 84317 01:30:42.599 05:25:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84317' 01:30:42.599 05:25:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84317 01:30:42.599 05:25:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84317 01:30:42.858 01:30:42.858 real 0m12.101s 01:30:42.858 user 0m48.819s 01:30:42.858 sys 0m2.015s 01:30:42.858 05:25:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:30:42.858 05:25:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:30:42.858 ************************************ 01:30:42.858 END TEST spdk_target_abort 01:30:42.858 ************************************ 01:30:43.116 05:25:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 01:30:43.116 05:25:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:30:43.116 05:25:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 01:30:43.116 05:25:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:30:43.116 ************************************ 01:30:43.116 START TEST kernel_target_abort 01:30:43.116 
************************************ 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 01:30:43.116 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:30:43.117 05:25:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:30:43.374 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:30:43.632 Waiting for block devices as requested 01:30:43.632 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:30:43.632 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:30:43.632 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:30:43.632 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:30:43.632 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:30:43.632 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:30:43.633 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:30:43.633 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:30:43.633 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:30:43.633 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:30:43.633 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:30:43.892 No valid GPT data, bailing 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:30:43.892 No valid GPT data, bailing 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:30:43.892 No valid GPT data, bailing 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:30:43.892 No valid GPT data, bailing 01:30:43.892 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b --hostid=0567fff2-ddaf-4a4f-877d-a2600d7e662b -a 10.0.0.1 -t tcp -s 4420 01:30:44.152 01:30:44.152 Discovery Log Number of Records 2, Generation counter 2 01:30:44.152 =====Discovery Log Entry 0====== 01:30:44.152 trtype: tcp 01:30:44.152 adrfam: ipv4 01:30:44.152 subtype: current discovery subsystem 01:30:44.152 treq: not specified, sq flow control disable supported 01:30:44.152 portid: 1 01:30:44.152 trsvcid: 4420 01:30:44.152 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:30:44.152 traddr: 10.0.0.1 01:30:44.152 eflags: none 01:30:44.152 sectype: none 01:30:44.152 =====Discovery Log Entry 1====== 01:30:44.152 trtype: tcp 01:30:44.152 adrfam: ipv4 01:30:44.152 subtype: nvme subsystem 01:30:44.152 treq: not specified, sq flow control disable supported 01:30:44.152 portid: 1 01:30:44.152 trsvcid: 4420 01:30:44.152 subnqn: nqn.2016-06.io.spdk:testnqn 01:30:44.152 traddr: 10.0.0.1 01:30:44.152 eflags: none 01:30:44.152 sectype: none 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:30:44.152 05:25:26 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:30:44.152 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:30:44.153 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:30:44.153 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 01:30:44.153 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:30:44.153 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 01:30:44.153 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:30:44.153 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:30:44.153 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:30:44.153 05:25:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:30:47.443 Initializing NVMe Controllers 01:30:47.443 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:30:47.443 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:30:47.443 Initialization complete. Launching workers. 01:30:47.443 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39794, failed: 0 01:30:47.443 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39794, failed to submit 0 01:30:47.443 success 0, unsuccessful 39794, failed 0 01:30:47.443 05:25:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:30:47.443 05:25:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:30:50.740 Initializing NVMe Controllers 01:30:50.740 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:30:50.740 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:30:50.740 Initialization complete. Launching workers. 
01:30:50.740 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85396, failed: 0 01:30:50.740 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36360, failed to submit 49036 01:30:50.740 success 0, unsuccessful 36360, failed 0 01:30:50.740 05:25:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:30:50.740 05:25:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:30:54.024 Initializing NVMe Controllers 01:30:54.024 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:30:54.024 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:30:54.024 Initialization complete. Launching workers. 01:30:54.024 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 108534, failed: 0 01:30:54.024 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27132, failed to submit 81402 01:30:54.024 success 0, unsuccessful 27132, failed 0 01:30:54.024 05:25:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 01:30:54.024 05:25:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:30:54.024 05:25:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 01:30:54.024 05:25:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:30:54.024 05:25:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:30:54.024 05:25:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:30:54.024 05:25:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:30:54.024 05:25:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 01:30:54.024 05:25:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 01:30:54.024 05:25:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:30:54.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:31:04.585 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:31:04.585 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:31:04.585 01:31:04.585 real 0m20.387s 01:31:04.585 user 0m7.731s 01:31:04.585 sys 0m10.261s 01:31:04.585 05:25:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:04.585 05:25:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 01:31:04.585 ************************************ 01:31:04.585 END TEST kernel_target_abort 01:31:04.585 ************************************ 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 
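For reference, the configure_kernel_target/clean_kernel_target pair exercised in this kernel_target_abort run maps onto the standard Linux nvmet configfs layout roughly as sketched below. The xtrace only shows the echo values, not the files they are redirected into, so the attribute paths here are the usual nvmet ones and should be treated as an assumption:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

modprobe nvmet                                  # loaded near the start of the test

# subsystem with one namespace backed by the spare disk found earlier (/dev/nvme1n1)
mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
echo 1            > "$subsys/attr_allow_any_host"     # assumed target of the bare "echo 1"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

# TCP listener on the initiator-side address from this run
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"          # expose the subsystem on the port

# teardown mirrors the rm/rmdir/modprobe -r sequence at the end of the test
echo 0 > "$subsys/namespaces/1/enable"
rm -f  "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir  "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
modprobe -r nvmet_tcp nvmet

The `nvme discover` output earlier in the log (two records: the discovery subsystem plus nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420) confirms the port/subsystem link took effect before the abort workloads were pointed at it.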
01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:31:04.585 rmmod nvme_tcp 01:31:04.585 rmmod nvme_fabrics 01:31:04.585 rmmod nvme_keyring 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84317 ']' 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84317 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84317 ']' 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84317 01:31:04.585 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84317) - No such process 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84317 is not found' 01:31:04.585 Process with pid 84317 is not found 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 01:31:04.585 05:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:31:04.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:31:04.585 Waiting for block devices as requested 01:31:04.585 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:31:04.585 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:31:04.585 05:25:46 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 01:31:04.585 ************************************ 01:31:04.585 END TEST nvmf_abort_qd_sizes 01:31:04.585 ************************************ 01:31:04.585 01:31:04.585 real 0m36.207s 01:31:04.585 user 0m57.798s 01:31:04.585 sys 0m14.007s 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:04.585 05:25:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:31:04.585 05:25:46 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 01:31:04.585 05:25:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:31:04.585 05:25:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:31:04.585 05:25:46 -- common/autotest_common.sh@10 -- # set +x 01:31:04.585 ************************************ 01:31:04.585 START TEST keyring_file 01:31:04.585 ************************************ 01:31:04.585 05:25:46 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 01:31:04.585 * Looking for test storage... 
01:31:04.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 01:31:04.585 05:25:46 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:31:04.585 05:25:46 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 01:31:04.585 05:25:46 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:31:04.585 05:25:46 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@344 -- # case "$op" in 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@345 -- # : 1 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@365 -- # decimal 1 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@353 -- # local d=1 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@355 -- # echo 1 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 01:31:04.585 05:25:46 keyring_file -- scripts/common.sh@366 -- # decimal 2 01:31:04.585 05:25:47 keyring_file -- scripts/common.sh@353 -- # local d=2 01:31:04.585 05:25:47 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:31:04.585 05:25:47 keyring_file -- scripts/common.sh@355 -- # echo 2 01:31:04.585 05:25:47 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 01:31:04.585 05:25:47 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:31:04.585 05:25:47 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:31:04.586 05:25:47 keyring_file -- scripts/common.sh@368 -- # return 0 01:31:04.586 05:25:47 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:31:04.586 05:25:47 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:31:04.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:04.586 --rc genhtml_branch_coverage=1 01:31:04.586 --rc genhtml_function_coverage=1 01:31:04.586 --rc genhtml_legend=1 01:31:04.586 --rc geninfo_all_blocks=1 01:31:04.586 --rc geninfo_unexecuted_blocks=1 01:31:04.586 01:31:04.586 ' 01:31:04.586 05:25:47 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:31:04.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:04.586 --rc genhtml_branch_coverage=1 01:31:04.586 --rc genhtml_function_coverage=1 01:31:04.586 --rc genhtml_legend=1 01:31:04.586 --rc geninfo_all_blocks=1 01:31:04.586 --rc 
geninfo_unexecuted_blocks=1 01:31:04.586 01:31:04.586 ' 01:31:04.586 05:25:47 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:31:04.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:04.586 --rc genhtml_branch_coverage=1 01:31:04.586 --rc genhtml_function_coverage=1 01:31:04.586 --rc genhtml_legend=1 01:31:04.586 --rc geninfo_all_blocks=1 01:31:04.586 --rc geninfo_unexecuted_blocks=1 01:31:04.586 01:31:04.586 ' 01:31:04.586 05:25:47 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:31:04.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:04.586 --rc genhtml_branch_coverage=1 01:31:04.586 --rc genhtml_function_coverage=1 01:31:04.586 --rc genhtml_legend=1 01:31:04.586 --rc geninfo_all_blocks=1 01:31:04.586 --rc geninfo_unexecuted_blocks=1 01:31:04.586 01:31:04.586 ' 01:31:04.586 05:25:47 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 01:31:04.586 05:25:47 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:31:04.586 05:25:47 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:31:04.586 05:25:47 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 01:31:04.586 05:25:47 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:31:04.586 05:25:47 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:31:04.586 05:25:47 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:31:04.586 05:25:47 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:04.586 05:25:47 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:04.846 05:25:47 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:04.846 05:25:47 keyring_file -- paths/export.sh@5 -- # export PATH 01:31:04.846 05:25:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@51 -- # : 0 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:31:04.846 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:31:04.846 05:25:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:31:04.846 05:25:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:31:04.846 05:25:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 01:31:04.846 05:25:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 01:31:04.846 05:25:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 01:31:04.846 05:25:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:31:04.846 05:25:47 
keyring_file -- keyring/common.sh@17 -- # name=key0 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@17 -- # digest=0 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@18 -- # mktemp 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oFWtQsZubp 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@733 -- # python - 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oFWtQsZubp 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oFWtQsZubp 01:31:04.846 05:25:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.oFWtQsZubp 01:31:04.846 05:25:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@17 -- # name=key1 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@17 -- # digest=0 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@18 -- # mktemp 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DuYmdIl1O1 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:31:04.846 05:25:47 keyring_file -- nvmf/common.sh@733 -- # python - 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DuYmdIl1O1 01:31:04.846 05:25:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DuYmdIl1O1 01:31:04.846 05:25:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.DuYmdIl1O1 01:31:04.846 05:25:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=85327 01:31:04.846 05:25:47 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:31:04.846 05:25:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85327 01:31:04.846 05:25:47 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85327 ']' 01:31:04.846 05:25:47 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:31:04.846 05:25:47 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:31:04.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
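The prep_key trace above stages each test key as a 0600-mode temp file holding the NVMe TLS PSK interchange string ("NVMeTLSkey-1:<digest>:..."). A minimal standalone sketch of that staging, assuming the checkout path used in this run and that test/nvmf/common.sh can be sourced with rootdir set (both assumptions taken from this log, not a documented standalone interface):

  # Sketch only: stage a key file the way prep_key does above.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk            # checkout path seen in this run (assumption for other setups)
  rootdir=$SPDK_DIR                                # variable nvmf/common.sh appears to rely on (assumption)
  source "$SPDK_DIR/test/nvmf/common.sh"           # provides format_interchange_psk
  key0path=$(mktemp)
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"   # same sample key/digest as the test
  chmod 0600 "$key0path"                           # 0660 is rejected later in this log
  echo "key0 staged at $key0path"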
01:31:04.846 05:25:47 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:31:04.846 05:25:47 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:31:04.846 05:25:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:31:04.846 [2024-12-09 05:25:47.191086] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:31:04.846 [2024-12-09 05:25:47.191164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85327 ] 01:31:05.110 [2024-12-09 05:25:47.343483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:31:05.110 [2024-12-09 05:25:47.415519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:31:05.110 [2024-12-09 05:25:47.514830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:31:05.680 05:25:48 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:31:05.680 05:25:48 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:31:05.680 05:25:48 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 01:31:05.680 05:25:48 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:05.680 05:25:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:31:05.680 [2024-12-09 05:25:48.111565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:31:05.680 null0 01:31:05.938 [2024-12-09 05:25:48.143571] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:31:05.938 [2024-12-09 05:25:48.143852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:05.938 05:25:48 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:31:05.938 [2024-12-09 05:25:48.175506] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 01:31:05.938 request: 01:31:05.938 { 01:31:05.938 "nqn": "nqn.2016-06.io.spdk:cnode0", 01:31:05.938 "secure_channel": false, 01:31:05.938 "listen_address": { 01:31:05.938 "trtype": "tcp", 01:31:05.938 "traddr": "127.0.0.1", 01:31:05.938 "trsvcid": "4420" 01:31:05.938 }, 01:31:05.938 "method": "nvmf_subsystem_add_listener", 
01:31:05.938 "req_id": 1 01:31:05.938 } 01:31:05.938 Got JSON-RPC error response 01:31:05.938 response: 01:31:05.938 { 01:31:05.938 "code": -32602, 01:31:05.938 "message": "Invalid parameters" 01:31:05.938 } 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:31:05.938 05:25:48 keyring_file -- keyring/file.sh@47 -- # bperfpid=85344 01:31:05.938 05:25:48 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 01:31:05.938 05:25:48 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85344 /var/tmp/bperf.sock 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85344 ']' 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:31:05.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:31:05.938 05:25:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:31:05.938 [2024-12-09 05:25:48.239333] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
01:31:05.938 [2024-12-09 05:25:48.239456] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85344 ] 01:31:06.195 [2024-12-09 05:25:48.394260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:31:06.195 [2024-12-09 05:25:48.449509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:31:06.195 [2024-12-09 05:25:48.491621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:31:06.760 05:25:49 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:31:06.760 05:25:49 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:31:06.760 05:25:49 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oFWtQsZubp 01:31:06.760 05:25:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oFWtQsZubp 01:31:07.019 05:25:49 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.DuYmdIl1O1 01:31:07.019 05:25:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.DuYmdIl1O1 01:31:07.277 05:25:49 keyring_file -- keyring/file.sh@52 -- # get_key key0 01:31:07.277 05:25:49 keyring_file -- keyring/file.sh@52 -- # jq -r .path 01:31:07.277 05:25:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:31:07.277 05:25:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:07.277 05:25:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:07.534 05:25:49 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.oFWtQsZubp == \/\t\m\p\/\t\m\p\.\o\F\W\t\Q\s\Z\u\b\p ]] 01:31:07.534 05:25:49 keyring_file -- keyring/file.sh@53 -- # jq -r .path 01:31:07.534 05:25:49 keyring_file -- keyring/file.sh@53 -- # get_key key1 01:31:07.534 05:25:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:07.534 05:25:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:07.534 05:25:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:31:07.793 05:25:50 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.DuYmdIl1O1 == \/\t\m\p\/\t\m\p\.\D\u\Y\m\d\I\l\1\O\1 ]] 01:31:07.793 05:25:50 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 01:31:07.793 05:25:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:31:07.793 05:25:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:31:07.793 05:25:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:07.793 05:25:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:07.793 05:25:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:31:07.793 05:25:50 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 01:31:08.051 05:25:50 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 01:31:08.051 05:25:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:31:08.051 05:25:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:31:08.051 05:25:50 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:08.051 05:25:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:08.051 05:25:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:31:08.051 05:25:50 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 01:31:08.051 05:25:50 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:31:08.051 05:25:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:31:08.309 [2024-12-09 05:25:50.654941] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:31:08.309 nvme0n1 01:31:08.309 05:25:50 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 01:31:08.309 05:25:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:31:08.309 05:25:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:31:08.309 05:25:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:08.309 05:25:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:31:08.309 05:25:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:08.567 05:25:50 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 01:31:08.567 05:25:50 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 01:31:08.567 05:25:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:31:08.567 05:25:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:31:08.567 05:25:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:08.567 05:25:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:08.567 05:25:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:31:08.824 05:25:51 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 01:31:08.824 05:25:51 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:31:09.082 Running I/O for 1 seconds... 
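Condensing the RPC trace above into one place: a sketch of the sequence this run drives over the bdevperf control socket around the 1-second I/O whose results follow. Every command is lifted from the trace; $key0path and $key1path are the temp key files staged earlier.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  # Register both PSK files with the keyring of the running bdevperf instance.
  "$rpc" -s "$sock" keyring_file_add_key key0 "$key0path"
  "$rpc" -s "$sock" keyring_file_add_key key1 "$key1path"
  # Attach an NVMe/TCP controller to the target on 127.0.0.1:4420, authenticating with key0.
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  # Kick off the queued randrw workload (bdevperf was started with -z and waits for this call).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests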
01:31:10.015 15066.00 IOPS, 58.85 MiB/s 01:31:10.015 Latency(us) 01:31:10.015 [2024-12-09T05:25:52.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:31:10.015 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 01:31:10.015 nvme0n1 : 1.01 15109.89 59.02 0.00 0.00 8451.64 3777.62 13278.91 01:31:10.015 [2024-12-09T05:25:52.471Z] =================================================================================================================== 01:31:10.015 [2024-12-09T05:25:52.471Z] Total : 15109.89 59.02 0.00 0.00 8451.64 3777.62 13278.91 01:31:10.015 { 01:31:10.015 "results": [ 01:31:10.015 { 01:31:10.015 "job": "nvme0n1", 01:31:10.015 "core_mask": "0x2", 01:31:10.015 "workload": "randrw", 01:31:10.015 "percentage": 50, 01:31:10.015 "status": "finished", 01:31:10.015 "queue_depth": 128, 01:31:10.015 "io_size": 4096, 01:31:10.015 "runtime": 1.005699, 01:31:10.015 "iops": 15109.888744047672, 01:31:10.015 "mibps": 59.02300290643622, 01:31:10.015 "io_failed": 0, 01:31:10.015 "io_timeout": 0, 01:31:10.015 "avg_latency_us": 8451.63949970746, 01:31:10.015 "min_latency_us": 3777.62096069869, 01:31:10.015 "max_latency_us": 13278.910043668122 01:31:10.015 } 01:31:10.015 ], 01:31:10.015 "core_count": 1 01:31:10.015 } 01:31:10.015 05:25:52 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:31:10.015 05:25:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:31:10.273 05:25:52 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 01:31:10.273 05:25:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:31:10.273 05:25:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:31:10.273 05:25:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:10.273 05:25:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:31:10.273 05:25:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:10.530 05:25:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 01:31:10.530 05:25:52 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 01:31:10.530 05:25:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:31:10.530 05:25:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:31:10.530 05:25:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:10.530 05:25:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:31:10.530 05:25:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:10.787 05:25:52 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 01:31:10.787 05:25:52 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:31:10.787 05:25:52 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:31:10.787 05:25:52 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:31:10.787 05:25:52 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:31:10.787 05:25:52 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:31:10.787 05:25:52 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:31:10.788 05:25:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:31:10.788 05:25:52 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:31:10.788 05:25:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:31:10.788 [2024-12-09 05:25:53.198694] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:31:10.788 [2024-12-09 05:25:53.198945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e75d0 (107): Transport endpoint is not connected 01:31:10.788 [2024-12-09 05:25:53.199930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e75d0 (9): Bad file descriptor 01:31:10.788 [2024-12-09 05:25:53.200927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 01:31:10.788 [2024-12-09 05:25:53.200948] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:31:10.788 [2024-12-09 05:25:53.200955] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 01:31:10.788 [2024-12-09 05:25:53.200962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
01:31:10.788 request: 01:31:10.788 { 01:31:10.788 "name": "nvme0", 01:31:10.788 "trtype": "tcp", 01:31:10.788 "traddr": "127.0.0.1", 01:31:10.788 "adrfam": "ipv4", 01:31:10.788 "trsvcid": "4420", 01:31:10.788 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:31:10.788 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:31:10.788 "prchk_reftag": false, 01:31:10.788 "prchk_guard": false, 01:31:10.788 "hdgst": false, 01:31:10.788 "ddgst": false, 01:31:10.788 "psk": "key1", 01:31:10.788 "allow_unrecognized_csi": false, 01:31:10.788 "method": "bdev_nvme_attach_controller", 01:31:10.788 "req_id": 1 01:31:10.788 } 01:31:10.788 Got JSON-RPC error response 01:31:10.788 response: 01:31:10.788 { 01:31:10.788 "code": -5, 01:31:10.788 "message": "Input/output error" 01:31:10.788 } 01:31:10.788 05:25:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:31:10.788 05:25:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:31:10.788 05:25:53 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:31:10.788 05:25:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:31:10.788 05:25:53 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 01:31:10.788 05:25:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:31:10.788 05:25:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:31:10.788 05:25:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:10.788 05:25:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:31:10.788 05:25:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:11.044 05:25:53 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 01:31:11.044 05:25:53 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 01:31:11.044 05:25:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:31:11.044 05:25:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:31:11.044 05:25:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:11.044 05:25:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:11.044 05:25:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:31:11.301 05:25:53 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 01:31:11.301 05:25:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 01:31:11.301 05:25:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:31:11.558 05:25:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 01:31:11.558 05:25:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 01:31:11.816 05:25:54 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 01:31:11.816 05:25:54 keyring_file -- keyring/file.sh@78 -- # jq length 01:31:11.816 05:25:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:12.074 05:25:54 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 01:31:12.074 05:25:54 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.oFWtQsZubp 01:31:12.074 05:25:54 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.oFWtQsZubp 01:31:12.074 05:25:54 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 01:31:12.074 05:25:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.oFWtQsZubp 01:31:12.074 05:25:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:31:12.074 05:25:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:31:12.074 05:25:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:31:12.074 05:25:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:31:12.074 05:25:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oFWtQsZubp 01:31:12.074 05:25:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oFWtQsZubp 01:31:12.332 [2024-12-09 05:25:54.548644] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.oFWtQsZubp': 0100660 01:31:12.332 [2024-12-09 05:25:54.548685] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:31:12.332 request: 01:31:12.332 { 01:31:12.332 "name": "key0", 01:31:12.332 "path": "/tmp/tmp.oFWtQsZubp", 01:31:12.332 "method": "keyring_file_add_key", 01:31:12.332 "req_id": 1 01:31:12.332 } 01:31:12.332 Got JSON-RPC error response 01:31:12.332 response: 01:31:12.332 { 01:31:12.332 "code": -1, 01:31:12.332 "message": "Operation not permitted" 01:31:12.332 } 01:31:12.332 05:25:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:31:12.332 05:25:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:31:12.332 05:25:54 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:31:12.332 05:25:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:31:12.332 05:25:54 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.oFWtQsZubp 01:31:12.332 05:25:54 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oFWtQsZubp 01:31:12.332 05:25:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oFWtQsZubp 01:31:12.332 05:25:54 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.oFWtQsZubp 01:31:12.332 05:25:54 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 01:31:12.332 05:25:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:31:12.332 05:25:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:31:12.332 05:25:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:12.332 05:25:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:12.332 05:25:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:31:12.590 05:25:54 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 01:31:12.590 05:25:54 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:31:12.590 05:25:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:31:12.590 05:25:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:31:12.590 05:25:54 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:31:12.590 05:25:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:31:12.590 05:25:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:31:12.590 05:25:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:31:12.590 05:25:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:31:12.590 05:25:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:31:12.848 [2024-12-09 05:25:55.187570] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.oFWtQsZubp': No such file or directory 01:31:12.848 [2024-12-09 05:25:55.187616] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 01:31:12.848 [2024-12-09 05:25:55.187633] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 01:31:12.848 [2024-12-09 05:25:55.187656] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 01:31:12.848 [2024-12-09 05:25:55.187663] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:31:12.848 [2024-12-09 05:25:55.187670] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 01:31:12.848 request: 01:31:12.848 { 01:31:12.848 "name": "nvme0", 01:31:12.848 "trtype": "tcp", 01:31:12.848 "traddr": "127.0.0.1", 01:31:12.848 "adrfam": "ipv4", 01:31:12.848 "trsvcid": "4420", 01:31:12.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:31:12.848 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:31:12.848 "prchk_reftag": false, 01:31:12.848 "prchk_guard": false, 01:31:12.848 "hdgst": false, 01:31:12.848 "ddgst": false, 01:31:12.849 "psk": "key0", 01:31:12.849 "allow_unrecognized_csi": false, 01:31:12.849 "method": "bdev_nvme_attach_controller", 01:31:12.849 "req_id": 1 01:31:12.849 } 01:31:12.849 Got JSON-RPC error response 01:31:12.849 response: 01:31:12.849 { 01:31:12.849 "code": -19, 01:31:12.849 "message": "No such device" 01:31:12.849 } 01:31:12.849 05:25:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:31:12.849 05:25:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:31:12.849 05:25:55 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:31:12.849 05:25:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:31:12.849 05:25:55 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 01:31:12.849 05:25:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:31:13.111 05:25:55 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:31:13.111 05:25:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:31:13.111 05:25:55 keyring_file -- keyring/common.sh@17 -- # name=key0 01:31:13.111 05:25:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:31:13.111 
05:25:55 keyring_file -- keyring/common.sh@17 -- # digest=0 01:31:13.111 05:25:55 keyring_file -- keyring/common.sh@18 -- # mktemp 01:31:13.111 05:25:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.goxKLIAO7f 01:31:13.111 05:25:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:31:13.111 05:25:55 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:31:13.111 05:25:55 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:31:13.111 05:25:55 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:31:13.111 05:25:55 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:31:13.111 05:25:55 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:31:13.111 05:25:55 keyring_file -- nvmf/common.sh@733 -- # python - 01:31:13.111 05:25:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.goxKLIAO7f 01:31:13.111 05:25:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.goxKLIAO7f 01:31:13.111 05:25:55 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.goxKLIAO7f 01:31:13.111 05:25:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.goxKLIAO7f 01:31:13.111 05:25:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.goxKLIAO7f 01:31:13.369 05:25:55 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:31:13.369 05:25:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:31:13.628 nvme0n1 01:31:13.628 05:25:56 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 01:31:13.628 05:25:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:31:13.628 05:25:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:31:13.628 05:25:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:13.628 05:25:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:31:13.628 05:25:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:13.887 05:25:56 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 01:31:13.887 05:25:56 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 01:31:13.887 05:25:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:31:14.145 05:25:56 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 01:31:14.145 05:25:56 keyring_file -- keyring/file.sh@102 -- # get_key key0 01:31:14.145 05:25:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:14.145 05:25:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:14.145 05:25:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:31:14.402 05:25:56 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 01:31:14.402 05:25:56 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 01:31:14.402 05:25:56 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 01:31:14.402 05:25:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:31:14.402 05:25:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:14.402 05:25:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:31:14.402 05:25:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:14.660 05:25:56 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 01:31:14.660 05:25:56 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:31:14.660 05:25:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:31:14.660 05:25:57 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 01:31:14.660 05:25:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:14.660 05:25:57 keyring_file -- keyring/file.sh@105 -- # jq length 01:31:14.918 05:25:57 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 01:31:14.918 05:25:57 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.goxKLIAO7f 01:31:14.918 05:25:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.goxKLIAO7f 01:31:15.176 05:25:57 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.DuYmdIl1O1 01:31:15.176 05:25:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.DuYmdIl1O1 01:31:15.434 05:25:57 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:31:15.434 05:25:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:31:15.693 nvme0n1 01:31:15.693 05:25:58 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 01:31:15.693 05:25:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 01:31:15.952 05:25:58 keyring_file -- keyring/file.sh@113 -- # config='{ 01:31:15.952 "subsystems": [ 01:31:15.952 { 01:31:15.952 "subsystem": "keyring", 01:31:15.952 "config": [ 01:31:15.952 { 01:31:15.952 "method": "keyring_file_add_key", 01:31:15.952 "params": { 01:31:15.952 "name": "key0", 01:31:15.952 "path": "/tmp/tmp.goxKLIAO7f" 01:31:15.952 } 01:31:15.952 }, 01:31:15.952 { 01:31:15.952 "method": "keyring_file_add_key", 01:31:15.952 "params": { 01:31:15.952 "name": "key1", 01:31:15.952 "path": "/tmp/tmp.DuYmdIl1O1" 01:31:15.952 } 01:31:15.952 } 01:31:15.952 ] 01:31:15.952 }, 01:31:15.952 { 01:31:15.952 "subsystem": "iobuf", 01:31:15.952 "config": [ 01:31:15.952 { 01:31:15.952 "method": "iobuf_set_options", 01:31:15.952 "params": { 01:31:15.952 "small_pool_count": 8192, 01:31:15.952 "large_pool_count": 1024, 01:31:15.952 "small_bufsize": 8192, 01:31:15.952 "large_bufsize": 135168, 01:31:15.952 "enable_numa": false 01:31:15.952 } 01:31:15.952 } 01:31:15.952 ] 01:31:15.952 }, 01:31:15.952 { 01:31:15.952 "subsystem": 
"sock", 01:31:15.952 "config": [ 01:31:15.952 { 01:31:15.952 "method": "sock_set_default_impl", 01:31:15.952 "params": { 01:31:15.952 "impl_name": "uring" 01:31:15.952 } 01:31:15.952 }, 01:31:15.952 { 01:31:15.952 "method": "sock_impl_set_options", 01:31:15.952 "params": { 01:31:15.952 "impl_name": "ssl", 01:31:15.952 "recv_buf_size": 4096, 01:31:15.952 "send_buf_size": 4096, 01:31:15.952 "enable_recv_pipe": true, 01:31:15.952 "enable_quickack": false, 01:31:15.952 "enable_placement_id": 0, 01:31:15.952 "enable_zerocopy_send_server": true, 01:31:15.952 "enable_zerocopy_send_client": false, 01:31:15.952 "zerocopy_threshold": 0, 01:31:15.952 "tls_version": 0, 01:31:15.952 "enable_ktls": false 01:31:15.952 } 01:31:15.952 }, 01:31:15.952 { 01:31:15.952 "method": "sock_impl_set_options", 01:31:15.952 "params": { 01:31:15.952 "impl_name": "posix", 01:31:15.952 "recv_buf_size": 2097152, 01:31:15.952 "send_buf_size": 2097152, 01:31:15.952 "enable_recv_pipe": true, 01:31:15.952 "enable_quickack": false, 01:31:15.952 "enable_placement_id": 0, 01:31:15.952 "enable_zerocopy_send_server": true, 01:31:15.952 "enable_zerocopy_send_client": false, 01:31:15.952 "zerocopy_threshold": 0, 01:31:15.952 "tls_version": 0, 01:31:15.952 "enable_ktls": false 01:31:15.952 } 01:31:15.952 }, 01:31:15.952 { 01:31:15.952 "method": "sock_impl_set_options", 01:31:15.952 "params": { 01:31:15.952 "impl_name": "uring", 01:31:15.952 "recv_buf_size": 2097152, 01:31:15.952 "send_buf_size": 2097152, 01:31:15.952 "enable_recv_pipe": true, 01:31:15.952 "enable_quickack": false, 01:31:15.952 "enable_placement_id": 0, 01:31:15.952 "enable_zerocopy_send_server": false, 01:31:15.952 "enable_zerocopy_send_client": false, 01:31:15.952 "zerocopy_threshold": 0, 01:31:15.952 "tls_version": 0, 01:31:15.952 "enable_ktls": false 01:31:15.952 } 01:31:15.952 } 01:31:15.952 ] 01:31:15.952 }, 01:31:15.952 { 01:31:15.952 "subsystem": "vmd", 01:31:15.952 "config": [] 01:31:15.952 }, 01:31:15.952 { 01:31:15.952 "subsystem": "accel", 01:31:15.952 "config": [ 01:31:15.952 { 01:31:15.952 "method": "accel_set_options", 01:31:15.952 "params": { 01:31:15.952 "small_cache_size": 128, 01:31:15.952 "large_cache_size": 16, 01:31:15.952 "task_count": 2048, 01:31:15.952 "sequence_count": 2048, 01:31:15.952 "buf_count": 2048 01:31:15.952 } 01:31:15.952 } 01:31:15.952 ] 01:31:15.952 }, 01:31:15.952 { 01:31:15.952 "subsystem": "bdev", 01:31:15.952 "config": [ 01:31:15.952 { 01:31:15.952 "method": "bdev_set_options", 01:31:15.952 "params": { 01:31:15.952 "bdev_io_pool_size": 65535, 01:31:15.952 "bdev_io_cache_size": 256, 01:31:15.952 "bdev_auto_examine": true, 01:31:15.952 "iobuf_small_cache_size": 128, 01:31:15.952 "iobuf_large_cache_size": 16 01:31:15.952 } 01:31:15.952 }, 01:31:15.952 { 01:31:15.952 "method": "bdev_raid_set_options", 01:31:15.952 "params": { 01:31:15.952 "process_window_size_kb": 1024, 01:31:15.952 "process_max_bandwidth_mb_sec": 0 01:31:15.952 } 01:31:15.952 }, 01:31:15.952 { 01:31:15.952 "method": "bdev_iscsi_set_options", 01:31:15.952 "params": { 01:31:15.952 "timeout_sec": 30 01:31:15.952 } 01:31:15.952 }, 01:31:15.952 { 01:31:15.952 "method": "bdev_nvme_set_options", 01:31:15.952 "params": { 01:31:15.952 "action_on_timeout": "none", 01:31:15.952 "timeout_us": 0, 01:31:15.952 "timeout_admin_us": 0, 01:31:15.952 "keep_alive_timeout_ms": 10000, 01:31:15.952 "arbitration_burst": 0, 01:31:15.952 "low_priority_weight": 0, 01:31:15.952 "medium_priority_weight": 0, 01:31:15.952 "high_priority_weight": 0, 01:31:15.952 "nvme_adminq_poll_period_us": 
10000, 01:31:15.952 "nvme_ioq_poll_period_us": 0, 01:31:15.952 "io_queue_requests": 512, 01:31:15.952 "delay_cmd_submit": true, 01:31:15.952 "transport_retry_count": 4, 01:31:15.952 "bdev_retry_count": 3, 01:31:15.952 "transport_ack_timeout": 0, 01:31:15.952 "ctrlr_loss_timeout_sec": 0, 01:31:15.952 "reconnect_delay_sec": 0, 01:31:15.952 "fast_io_fail_timeout_sec": 0, 01:31:15.952 "disable_auto_failback": false, 01:31:15.952 "generate_uuids": false, 01:31:15.952 "transport_tos": 0, 01:31:15.952 "nvme_error_stat": false, 01:31:15.952 "rdma_srq_size": 0, 01:31:15.953 "io_path_stat": false, 01:31:15.953 "allow_accel_sequence": false, 01:31:15.953 "rdma_max_cq_size": 0, 01:31:15.953 "rdma_cm_event_timeout_ms": 0, 01:31:15.953 "dhchap_digests": [ 01:31:15.953 "sha256", 01:31:15.953 "sha384", 01:31:15.953 "sha512" 01:31:15.953 ], 01:31:15.953 "dhchap_dhgroups": [ 01:31:15.953 "null", 01:31:15.953 "ffdhe2048", 01:31:15.953 "ffdhe3072", 01:31:15.953 "ffdhe4096", 01:31:15.953 "ffdhe6144", 01:31:15.953 "ffdhe8192" 01:31:15.953 ] 01:31:15.953 } 01:31:15.953 }, 01:31:15.953 { 01:31:15.953 "method": "bdev_nvme_attach_controller", 01:31:15.953 "params": { 01:31:15.953 "name": "nvme0", 01:31:15.953 "trtype": "TCP", 01:31:15.953 "adrfam": "IPv4", 01:31:15.953 "traddr": "127.0.0.1", 01:31:15.953 "trsvcid": "4420", 01:31:15.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:31:15.953 "prchk_reftag": false, 01:31:15.953 "prchk_guard": false, 01:31:15.953 "ctrlr_loss_timeout_sec": 0, 01:31:15.953 "reconnect_delay_sec": 0, 01:31:15.953 "fast_io_fail_timeout_sec": 0, 01:31:15.953 "psk": "key0", 01:31:15.953 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:31:15.953 "hdgst": false, 01:31:15.953 "ddgst": false, 01:31:15.953 "multipath": "multipath" 01:31:15.953 } 01:31:15.953 }, 01:31:15.953 { 01:31:15.953 "method": "bdev_nvme_set_hotplug", 01:31:15.953 "params": { 01:31:15.953 "period_us": 100000, 01:31:15.953 "enable": false 01:31:15.953 } 01:31:15.953 }, 01:31:15.953 { 01:31:15.953 "method": "bdev_wait_for_examine" 01:31:15.953 } 01:31:15.953 ] 01:31:15.953 }, 01:31:15.953 { 01:31:15.953 "subsystem": "nbd", 01:31:15.953 "config": [] 01:31:15.953 } 01:31:15.953 ] 01:31:15.953 }' 01:31:15.953 05:25:58 keyring_file -- keyring/file.sh@115 -- # killprocess 85344 01:31:15.953 05:25:58 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85344 ']' 01:31:15.953 05:25:58 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85344 01:31:15.953 05:25:58 keyring_file -- common/autotest_common.sh@959 -- # uname 01:31:15.953 05:25:58 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:31:15.953 05:25:58 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85344 01:31:15.953 05:25:58 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:31:15.953 05:25:58 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:31:15.953 05:25:58 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85344' 01:31:15.953 killing process with pid 85344 01:31:15.953 05:25:58 keyring_file -- common/autotest_common.sh@973 -- # kill 85344 01:31:15.953 Received shutdown signal, test time was about 1.000000 seconds 01:31:15.953 01:31:15.953 Latency(us) 01:31:15.953 [2024-12-09T05:25:58.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:31:15.953 [2024-12-09T05:25:58.409Z] =================================================================================================================== 01:31:15.953 
[2024-12-09T05:25:58.409Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:31:15.953 05:25:58 keyring_file -- common/autotest_common.sh@978 -- # wait 85344 01:31:16.211 05:25:58 keyring_file -- keyring/file.sh@118 -- # bperfpid=85584 01:31:16.211 05:25:58 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85584 /var/tmp/bperf.sock 01:31:16.212 05:25:58 keyring_file -- keyring/file.sh@116 -- # echo '{ 01:31:16.212 "subsystems": [ 01:31:16.212 { 01:31:16.212 "subsystem": "keyring", 01:31:16.212 "config": [ 01:31:16.212 { 01:31:16.212 "method": "keyring_file_add_key", 01:31:16.212 "params": { 01:31:16.212 "name": "key0", 01:31:16.212 "path": "/tmp/tmp.goxKLIAO7f" 01:31:16.212 } 01:31:16.212 }, 01:31:16.212 { 01:31:16.212 "method": "keyring_file_add_key", 01:31:16.212 "params": { 01:31:16.212 "name": "key1", 01:31:16.212 "path": "/tmp/tmp.DuYmdIl1O1" 01:31:16.212 } 01:31:16.212 } 01:31:16.212 ] 01:31:16.212 }, 01:31:16.212 { 01:31:16.212 "subsystem": "iobuf", 01:31:16.212 "config": [ 01:31:16.212 { 01:31:16.212 "method": "iobuf_set_options", 01:31:16.212 "params": { 01:31:16.212 "small_pool_count": 8192, 01:31:16.212 "large_pool_count": 1024, 01:31:16.212 "small_bufsize": 8192, 01:31:16.212 "large_bufsize": 135168, 01:31:16.212 "enable_numa": false 01:31:16.212 } 01:31:16.212 } 01:31:16.212 ] 01:31:16.212 }, 01:31:16.212 { 01:31:16.212 "subsystem": "sock", 01:31:16.212 "config": [ 01:31:16.212 { 01:31:16.212 "method": "sock_set_default_impl", 01:31:16.212 "params": { 01:31:16.212 "impl_name": "uring" 01:31:16.212 } 01:31:16.212 }, 01:31:16.212 { 01:31:16.212 "method": "sock_impl_set_options", 01:31:16.212 "params": { 01:31:16.212 "impl_name": "ssl", 01:31:16.212 "recv_buf_size": 4096, 01:31:16.212 "send_buf_size": 4096, 01:31:16.212 "enable_recv_pipe": true, 01:31:16.212 "enable_quickack": false, 01:31:16.212 "enable_placement_id": 0, 01:31:16.212 "enable_zerocopy_send_server": true, 01:31:16.212 "enable_zerocopy_send_client": false, 01:31:16.212 "zerocopy_threshold": 0, 01:31:16.212 "tls_version": 0, 01:31:16.212 "enable_ktls": false 01:31:16.212 } 01:31:16.212 }, 01:31:16.212 { 01:31:16.212 "method": "sock_impl_set_options", 01:31:16.212 "params": { 01:31:16.212 "impl_name": "posix", 01:31:16.212 "recv_buf_size": 2097152, 01:31:16.212 "send_buf_size": 2097152, 01:31:16.212 "enable_recv_pipe": true, 01:31:16.212 "enable_quickack": false, 01:31:16.212 "enable_placement_id": 0, 01:31:16.212 "enable_zerocopy_send_server": true, 01:31:16.212 "enable_zerocopy_send_client": false, 01:31:16.212 "zerocopy_threshold": 0, 01:31:16.212 "tls_version": 0, 01:31:16.212 "enable_ktls": false 01:31:16.212 } 01:31:16.212 }, 01:31:16.212 { 01:31:16.212 "method": "sock_impl_set_options", 01:31:16.212 "params": { 01:31:16.212 "impl_name": "uring", 01:31:16.212 "recv_buf_size": 2097152, 01:31:16.212 "send_buf_size": 2097152, 01:31:16.212 "enable_recv_pipe": true, 01:31:16.212 "enable_quickack": false, 01:31:16.212 "enable_placement_id": 0, 01:31:16.212 "enable_zerocopy_send_server": false, 01:31:16.212 "enable_zerocopy_send_client": false, 01:31:16.212 "zerocopy_threshold": 0, 01:31:16.212 "tls_version": 0, 01:31:16.212 "enable_ktls": false 01:31:16.212 } 01:31:16.212 } 01:31:16.212 ] 01:31:16.212 }, 01:31:16.212 { 01:31:16.212 "subsystem": "vmd", 01:31:16.212 "config": [] 01:31:16.212 }, 01:31:16.212 { 01:31:16.212 "subsystem": "accel", 01:31:16.212 "config": [ 01:31:16.212 { 01:31:16.212 "method": "accel_set_options", 01:31:16.212 "params": { 01:31:16.212 "small_cache_size": 128, 01:31:16.212 
"large_cache_size": 16, 01:31:16.212 "task_count": 2048, 01:31:16.212 "sequence_count": 2048, 01:31:16.212 "buf_count": 2048 01:31:16.212 } 01:31:16.212 } 01:31:16.212 ] 01:31:16.212 }, 01:31:16.212 { 01:31:16.212 "subsystem": "bdev", 01:31:16.212 "config": [ 01:31:16.212 { 01:31:16.212 "method": "bdev_set_options", 01:31:16.212 "params": { 01:31:16.212 "bdev_io_pool_size": 65535, 01:31:16.212 "bdev_io_cache_size": 256, 01:31:16.212 "bdev_auto_examine": true, 01:31:16.212 "iobuf_small_cache_size": 128, 01:31:16.212 "iobuf_large_cache_size": 16 01:31:16.212 } 01:31:16.212 }, 01:31:16.212 { 01:31:16.212 "method": "bdev_raid_set_options", 01:31:16.212 "params": { 01:31:16.212 "process_window_size_kb": 1024, 01:31:16.212 "process_max_bandwidth_mb_sec": 0 01:31:16.212 } 01:31:16.212 }, 01:31:16.212 { 01:31:16.212 "method": "bdev_iscsi_set_options", 01:31:16.212 "params": { 01:31:16.212 "timeout_sec": 30 01:31:16.212 } 01:31:16.212 }, 01:31:16.212 { 01:31:16.212 "method": "bdev_nvme_set_options", 01:31:16.212 "params": { 01:31:16.212 "action_on_timeout": "none", 01:31:16.212 "timeout_us": 0, 01:31:16.212 "timeout_admin_us": 0, 01:31:16.212 "keep_alive_timeout_ms": 10000, 01:31:16.212 "arbitration_burst": 0, 01:31:16.212 "low_priority_weight": 0, 01:31:16.212 "medium_priority_weight": 0, 01:31:16.212 "high_priority_weight": 0, 01:31:16.212 "nvme_adminq_poll_period_us": 10000, 01:31:16.212 "nvme_ioq_poll_period_us": 0, 01:31:16.212 "io_queue_requests": 512, 01:31:16.212 "delay_cmd_submit": true, 01:31:16.212 "transport_retry_count": 4, 01:31:16.212 "bdev_retry_count": 3, 01:31:16.212 "transport_ack_timeout": 0, 01:31:16.212 "ctrlr_loss_timeout_sec": 0, 01:31:16.212 "reconnect_delay_sec": 0, 01:31:16.212 "fast_io_fail_timeout_sec": 0, 01:31:16.212 "disable_auto_failback": false, 01:31:16.212 "generate_uuids": false, 01:31:16.212 "transport_tos": 0, 01:31:16.212 "nvme_error_stat": false, 01:31:16.212 "rdma_srq_size": 0, 01:31:16.212 "io_path_stat": false, 01:31:16.213 "allow_accel_sequence": false, 01:31:16.213 "rdma_max_cq_size": 0, 01:31:16.213 "rdma_cm_event_timeout_ms": 0, 01:31:16.213 "dhchap_digests": [ 01:31:16.213 "sha256", 01:31:16.213 "sha384", 01:31:16.213 "sha512" 01:31:16.213 ], 01:31:16.213 "dhchap_dhgroups": [ 01:31:16.213 "null", 01:31:16.213 "ffdhe2048", 01:31:16.213 "ffdhe3072", 01:31:16.213 "ffdhe4096", 01:31:16.213 "ffdhe6144", 01:31:16.213 "ffdhe8192" 01:31:16.213 ] 01:31:16.213 } 01:31:16.213 }, 01:31:16.213 { 01:31:16.213 "method": "bdev_nvme_attach_controller", 01:31:16.213 "params": { 01:31:16.213 "name": "nvme0", 01:31:16.213 "trtype": "TCP", 01:31:16.213 "adrfam": "IPv4", 01:31:16.213 "traddr": "127.0.0.1", 01:31:16.213 "trsvcid": "4420", 01:31:16.213 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:31:16.213 "prchk_reftag": false, 01:31:16.213 "prchk_guard": false, 01:31:16.213 "ctrlr_loss_timeout_sec": 0, 01:31:16.213 "reconnect_delay_sec": 0, 01:31:16.213 "fast_io_fail_timeout_sec": 0, 01:31:16.213 "psk": "key0", 01:31:16.213 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:31:16.213 "hdgst": false, 01:31:16.213 "ddgst": false, 01:31:16.213 "multipath": "multipath" 01:31:16.213 } 01:31:16.213 }, 01:31:16.213 { 01:31:16.213 "method": "bdev_nvme_set_hotplug", 01:31:16.213 "params": { 01:31:16.213 "period_us": 100000, 01:31:16.213 "enable": false 01:31:16.213 } 01:31:16.213 }, 01:31:16.213 { 01:31:16.213 "method": "bdev_wait_for_examine" 01:31:16.213 } 01:31:16.213 ] 01:31:16.213 }, 01:31:16.213 { 01:31:16.213 "subsystem": "nbd", 01:31:16.213 "config": [] 01:31:16.213 } 01:31:16.213 ] 
01:31:16.213 }' 01:31:16.213 05:25:58 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 01:31:16.213 05:25:58 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85584 ']' 01:31:16.213 05:25:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:31:16.213 05:25:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:31:16.213 05:25:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:31:16.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:31:16.213 05:25:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:31:16.213 05:25:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:31:16.213 [2024-12-09 05:25:58.621710] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 01:31:16.213 [2024-12-09 05:25:58.621790] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85584 ] 01:31:16.472 [2024-12-09 05:25:58.772816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:31:16.472 [2024-12-09 05:25:58.826431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:31:16.730 [2024-12-09 05:25:58.949000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:31:16.730 [2024-12-09 05:25:59.000419] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:31:17.296 05:25:59 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:31:17.296 05:25:59 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:31:17.296 05:25:59 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 01:31:17.296 05:25:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:17.296 05:25:59 keyring_file -- keyring/file.sh@121 -- # jq length 01:31:17.554 05:25:59 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 01:31:17.554 05:25:59 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 01:31:17.554 05:25:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:31:17.554 05:25:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:31:17.554 05:25:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:17.554 05:25:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:31:17.554 05:25:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:17.812 05:26:00 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 01:31:17.812 05:26:00 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 01:31:17.812 05:26:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:31:17.812 05:26:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:31:17.812 05:26:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:17.812 05:26:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:31:17.812 05:26:00 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:18.070 05:26:00 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 01:31:18.070 05:26:00 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 01:31:18.070 05:26:00 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 01:31:18.070 05:26:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 01:31:18.070 05:26:00 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 01:31:18.070 05:26:00 keyring_file -- keyring/file.sh@1 -- # cleanup 01:31:18.328 05:26:00 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.goxKLIAO7f /tmp/tmp.DuYmdIl1O1 01:31:18.328 05:26:00 keyring_file -- keyring/file.sh@20 -- # killprocess 85584 01:31:18.328 05:26:00 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85584 ']' 01:31:18.328 05:26:00 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85584 01:31:18.328 05:26:00 keyring_file -- common/autotest_common.sh@959 -- # uname 01:31:18.328 05:26:00 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:31:18.328 05:26:00 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85584 01:31:18.328 05:26:00 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:31:18.328 05:26:00 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:31:18.328 killing process with pid 85584 01:31:18.328 05:26:00 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85584' 01:31:18.328 05:26:00 keyring_file -- common/autotest_common.sh@973 -- # kill 85584 01:31:18.328 Received shutdown signal, test time was about 1.000000 seconds 01:31:18.328 01:31:18.328 Latency(us) 01:31:18.328 [2024-12-09T05:26:00.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:31:18.328 [2024-12-09T05:26:00.784Z] =================================================================================================================== 01:31:18.328 [2024-12-09T05:26:00.784Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:31:18.328 05:26:00 keyring_file -- common/autotest_common.sh@978 -- # wait 85584 01:31:18.328 05:26:00 keyring_file -- keyring/file.sh@21 -- # killprocess 85327 01:31:18.328 05:26:00 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85327 ']' 01:31:18.328 05:26:00 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85327 01:31:18.328 05:26:00 keyring_file -- common/autotest_common.sh@959 -- # uname 01:31:18.586 05:26:00 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:31:18.586 05:26:00 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85327 01:31:18.586 05:26:00 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:31:18.586 05:26:00 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:31:18.586 killing process with pid 85327 01:31:18.586 05:26:00 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85327' 01:31:18.586 05:26:00 keyring_file -- common/autotest_common.sh@973 -- # kill 85327 01:31:18.586 05:26:00 keyring_file -- common/autotest_common.sh@978 -- # wait 85327 01:31:19.152 ************************************ 01:31:19.152 END TEST keyring_file 01:31:19.152 ************************************ 01:31:19.152 01:31:19.152 real 0m14.629s 01:31:19.152 user 0m35.050s 
01:31:19.152 sys 0m3.115s 01:31:19.152 05:26:01 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:19.153 05:26:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:31:19.153 05:26:01 -- spdk/autotest.sh@293 -- # [[ y == y ]] 01:31:19.153 05:26:01 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 01:31:19.153 05:26:01 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:31:19.153 05:26:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:31:19.153 05:26:01 -- common/autotest_common.sh@10 -- # set +x 01:31:19.153 ************************************ 01:31:19.153 START TEST keyring_linux 01:31:19.153 ************************************ 01:31:19.153 05:26:01 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 01:31:19.153 Joined session keyring: 300372108 01:31:19.410 * Looking for test storage... 01:31:19.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 01:31:19.410 05:26:01 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:31:19.410 05:26:01 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 01:31:19.410 05:26:01 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:31:19.410 05:26:01 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:31:19.410 05:26:01 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:31:19.410 05:26:01 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 01:31:19.410 05:26:01 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 01:31:19.410 05:26:01 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 01:31:19.410 05:26:01 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 01:31:19.410 05:26:01 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 01:31:19.410 05:26:01 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 01:31:19.410 05:26:01 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 01:31:19.410 05:26:01 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 01:31:19.410 05:26:01 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 01:31:19.410 05:26:01 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@345 -- # : 1 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@365 -- # decimal 1 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@353 -- # local d=1 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@355 -- # echo 1 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@366 -- # decimal 2 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@353 -- # local d=2 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@355 -- # echo 2 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@368 -- # return 0 01:31:19.411 05:26:01 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:31:19.411 05:26:01 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:31:19.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:19.411 --rc genhtml_branch_coverage=1 01:31:19.411 --rc genhtml_function_coverage=1 01:31:19.411 --rc genhtml_legend=1 01:31:19.411 --rc geninfo_all_blocks=1 01:31:19.411 --rc geninfo_unexecuted_blocks=1 01:31:19.411 01:31:19.411 ' 01:31:19.411 05:26:01 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:31:19.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:19.411 --rc genhtml_branch_coverage=1 01:31:19.411 --rc genhtml_function_coverage=1 01:31:19.411 --rc genhtml_legend=1 01:31:19.411 --rc geninfo_all_blocks=1 01:31:19.411 --rc geninfo_unexecuted_blocks=1 01:31:19.411 01:31:19.411 ' 01:31:19.411 05:26:01 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:31:19.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:19.411 --rc genhtml_branch_coverage=1 01:31:19.411 --rc genhtml_function_coverage=1 01:31:19.411 --rc genhtml_legend=1 01:31:19.411 --rc geninfo_all_blocks=1 01:31:19.411 --rc geninfo_unexecuted_blocks=1 01:31:19.411 01:31:19.411 ' 01:31:19.411 05:26:01 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:31:19.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:31:19.411 --rc genhtml_branch_coverage=1 01:31:19.411 --rc genhtml_function_coverage=1 01:31:19.411 --rc genhtml_legend=1 01:31:19.411 --rc geninfo_all_blocks=1 01:31:19.411 --rc geninfo_unexecuted_blocks=1 01:31:19.411 01:31:19.411 ' 01:31:19.411 05:26:01 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 01:31:19.411 05:26:01 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@7 -- # uname -s 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:31:19.411 05:26:01 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=0567fff2-ddaf-4a4f-877d-a2600d7e662b 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:31:19.411 05:26:01 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:31:19.411 05:26:01 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:19.411 05:26:01 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:19.411 05:26:01 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:19.411 05:26:01 keyring_linux -- paths/export.sh@5 -- # export PATH 01:31:19.411 05:26:01 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@51 -- # : 0 
01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:31:19.411 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 01:31:19.411 05:26:01 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:31:19.411 05:26:01 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:31:19.411 05:26:01 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:31:19.411 05:26:01 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 01:31:19.411 05:26:01 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 01:31:19.411 05:26:01 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 01:31:19.411 05:26:01 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 01:31:19.411 05:26:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:31:19.411 05:26:01 keyring_linux -- keyring/common.sh@17 -- # name=key0 01:31:19.411 05:26:01 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:31:19.411 05:26:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:31:19.411 05:26:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 01:31:19.411 05:26:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@732 -- # digest=0 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@733 -- # python - 01:31:19.411 05:26:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 01:31:19.411 05:26:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 01:31:19.411 /tmp/:spdk-test:key0 01:31:19.411 05:26:01 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 01:31:19.411 05:26:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:31:19.411 05:26:01 keyring_linux -- keyring/common.sh@17 -- # name=key1 01:31:19.411 05:26:01 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:31:19.411 05:26:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:31:19.411 05:26:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 01:31:19.411 05:26:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:31:19.411 05:26:01 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 01:31:19.412 05:26:01 keyring_linux -- nvmf/common.sh@732 -- # digest=0 01:31:19.412 05:26:01 keyring_linux -- nvmf/common.sh@733 -- # python - 01:31:19.670 05:26:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 01:31:19.670 05:26:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 01:31:19.670 /tmp/:spdk-test:key1 01:31:19.670 05:26:01 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85712 01:31:19.670 05:26:01 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:31:19.670 05:26:01 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85712 01:31:19.670 05:26:01 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85712 ']' 01:31:19.670 05:26:01 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:31:19.670 05:26:01 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 01:31:19.670 05:26:01 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:31:19.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:31:19.670 05:26:01 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 01:31:19.670 05:26:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:31:19.670 [2024-12-09 05:26:01.969347] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
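The two prep_key calls above turn the raw hex keys from linux.sh (00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00) into the NVMeTLSkey-1:00:...: interchange strings written to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. A minimal sketch of what format_interchange_psk appears to compute for digest 0 follows; the trailer layout (a CRC32 of the key bytes appended little-endian before base64 encoding) is an assumption on my part rather than something this log confirms, and the helper name format_interchange_psk_sketch is purely illustrative.

format_interchange_psk_sketch() {
  local key=$1 digest=${2:-0}
  python3 - "$key" "$digest" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = struct.pack("<I", zlib.crc32(key))   # assumption: CRC32 of the key bytes, little-endian
print("NVMeTLSkey-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()))
PY
}
format_interchange_psk_sketch 00112233445566778899aabbccddeeff 0

If that assumption holds, the output has the same NVMeTLSkey-1:00:<base64>: shape as the strings handed to keyctl further down in this trace.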
01:31:19.670 [2024-12-09 05:26:01.969494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85712 ] 01:31:19.670 [2024-12-09 05:26:02.120511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:31:19.929 [2024-12-09 05:26:02.194792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:31:19.929 [2024-12-09 05:26:02.295866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:31:20.496 05:26:02 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:31:20.496 05:26:02 keyring_linux -- common/autotest_common.sh@868 -- # return 0 01:31:20.496 05:26:02 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 01:31:20.496 05:26:02 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 01:31:20.496 05:26:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:31:20.496 [2024-12-09 05:26:02.825265] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:31:20.496 null0 01:31:20.496 [2024-12-09 05:26:02.857171] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:31:20.496 [2024-12-09 05:26:02.857370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:31:20.496 05:26:02 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:31:20.496 05:26:02 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 01:31:20.496 576585084 01:31:20.496 05:26:02 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 01:31:20.496 428204821 01:31:20.496 05:26:02 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85730 01:31:20.496 05:26:02 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 01:31:20.496 05:26:02 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85730 /var/tmp/bperf.sock 01:31:20.496 05:26:02 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85730 ']' 01:31:20.496 05:26:02 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:31:20.496 05:26:02 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 01:31:20.496 05:26:02 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:31:20.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:31:20.496 05:26:02 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 01:31:20.496 05:26:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:31:20.496 [2024-12-09 05:26:02.940693] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
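At this point both interchange strings have been loaded into the kernel session keyring (the two keyctl add user ... @s calls returned serials 576585084 and 428204821) and bdevperf has been launched with --wait-for-rpc. A condensed initiator-side sketch of the flow being traced here, built only from commands that appear verbatim in this log, looks like the following; it assumes the spdk_tgt started earlier in the run is still listening on 127.0.0.1:4420 and is already configured to accept the key0 PSK, and the absolute paths are the ones this particular workspace uses.

# load the interchange-format PSK into the kernel session keyring
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

# start the initiator application and let it wait for configuration over RPC
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z --wait-for-rpc &

# enable the Linux keyring backend, finish init, then attach using the in-kernel key by name
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

The --psk argument in this test is a keyring name (:spdk-test:key0) rather than a file path, which is the behavior keyring_linux is exercising.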
01:31:20.496 [2024-12-09 05:26:02.940822] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85730 ] 01:31:20.754 [2024-12-09 05:26:03.092136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:31:20.754 [2024-12-09 05:26:03.140155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:31:21.686 05:26:03 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:31:21.686 05:26:03 keyring_linux -- common/autotest_common.sh@868 -- # return 0 01:31:21.686 05:26:03 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 01:31:21.686 05:26:03 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 01:31:21.686 05:26:03 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 01:31:21.687 05:26:03 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:31:21.944 [2024-12-09 05:26:04.226995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 01:31:21.944 05:26:04 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:31:21.944 05:26:04 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:31:22.203 [2024-12-09 05:26:04.496591] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:31:22.203 nvme0n1 01:31:22.203 05:26:04 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 01:31:22.203 05:26:04 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 01:31:22.203 05:26:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:31:22.203 05:26:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:31:22.203 05:26:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:31:22.203 05:26:04 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:22.461 05:26:04 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 01:31:22.461 05:26:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:31:22.461 05:26:04 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 01:31:22.461 05:26:04 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 01:31:22.461 05:26:04 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:31:22.461 05:26:04 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:22.461 05:26:04 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 01:31:22.719 05:26:05 keyring_linux -- keyring/linux.sh@25 -- # sn=576585084 01:31:22.719 05:26:05 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 01:31:22.719 05:26:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
01:31:22.719 05:26:05 keyring_linux -- keyring/linux.sh@26 -- # [[ 576585084 == \5\7\6\5\8\5\0\8\4 ]] 01:31:22.719 05:26:05 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 576585084 01:31:22.719 05:26:05 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 01:31:22.719 05:26:05 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:31:22.719 Running I/O for 1 seconds... 01:31:24.130 15301.00 IOPS, 59.77 MiB/s 01:31:24.130 Latency(us) 01:31:24.130 [2024-12-09T05:26:06.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:31:24.130 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:31:24.130 nvme0n1 : 1.01 15298.41 59.76 0.00 0.00 8328.77 2589.96 10245.37 01:31:24.130 [2024-12-09T05:26:06.586Z] =================================================================================================================== 01:31:24.130 [2024-12-09T05:26:06.586Z] Total : 15298.41 59.76 0.00 0.00 8328.77 2589.96 10245.37 01:31:24.130 { 01:31:24.130 "results": [ 01:31:24.130 { 01:31:24.130 "job": "nvme0n1", 01:31:24.130 "core_mask": "0x2", 01:31:24.130 "workload": "randread", 01:31:24.130 "status": "finished", 01:31:24.130 "queue_depth": 128, 01:31:24.130 "io_size": 4096, 01:31:24.130 "runtime": 1.008536, 01:31:24.130 "iops": 15298.412748776444, 01:31:24.130 "mibps": 59.75942479990798, 01:31:24.130 "io_failed": 0, 01:31:24.130 "io_timeout": 0, 01:31:24.130 "avg_latency_us": 8328.767622927504, 01:31:24.130 "min_latency_us": 2589.9598253275108, 01:31:24.130 "max_latency_us": 10245.365938864628 01:31:24.130 } 01:31:24.130 ], 01:31:24.130 "core_count": 1 01:31:24.130 } 01:31:24.130 05:26:06 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:31:24.130 05:26:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:31:24.130 05:26:06 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 01:31:24.130 05:26:06 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 01:31:24.130 05:26:06 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:31:24.130 05:26:06 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:31:24.130 05:26:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:31:24.130 05:26:06 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:31:24.388 05:26:06 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 01:31:24.389 05:26:06 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:31:24.389 05:26:06 keyring_linux -- keyring/linux.sh@23 -- # return 01:31:24.389 05:26:06 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:31:24.389 05:26:06 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 01:31:24.389 05:26:06 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:31:24.389 
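The bdevperf summary above is internally consistent: for the 4 KiB randread workload, throughput in MiB/s is just IOPS divided by 256, as this one-liner using the IOPS value from the JSON result block reproduces.

awk 'BEGIN { iops = 15298.41; printf "%.2f MiB/s\n", iops * 4096 / (1024 * 1024) }'   # prints 59.76 MiB/s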
05:26:06 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:31:24.389 05:26:06 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:31:24.389 05:26:06 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:31:24.389 05:26:06 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:31:24.389 05:26:06 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:31:24.389 05:26:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:31:24.389 [2024-12-09 05:26:06.822694] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:31:24.389 [2024-12-09 05:26:06.823235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb95d0 (107): Transport endpoint is not connected 01:31:24.389 [2024-12-09 05:26:06.824221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb95d0 (9): Bad file descriptor 01:31:24.389 [2024-12-09 05:26:06.825217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 01:31:24.389 [2024-12-09 05:26:06.825240] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:31:24.389 [2024-12-09 05:26:06.825247] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 01:31:24.389 [2024-12-09 05:26:06.825254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
01:31:24.389 request: 01:31:24.389 { 01:31:24.389 "name": "nvme0", 01:31:24.389 "trtype": "tcp", 01:31:24.389 "traddr": "127.0.0.1", 01:31:24.389 "adrfam": "ipv4", 01:31:24.389 "trsvcid": "4420", 01:31:24.389 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:31:24.389 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:31:24.389 "prchk_reftag": false, 01:31:24.389 "prchk_guard": false, 01:31:24.389 "hdgst": false, 01:31:24.389 "ddgst": false, 01:31:24.389 "psk": ":spdk-test:key1", 01:31:24.389 "allow_unrecognized_csi": false, 01:31:24.389 "method": "bdev_nvme_attach_controller", 01:31:24.389 "req_id": 1 01:31:24.389 } 01:31:24.389 Got JSON-RPC error response 01:31:24.389 response: 01:31:24.389 { 01:31:24.389 "code": -5, 01:31:24.389 "message": "Input/output error" 01:31:24.389 } 01:31:24.389 05:26:06 keyring_linux -- common/autotest_common.sh@655 -- # es=1 01:31:24.389 05:26:06 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:31:24.389 05:26:06 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:31:24.389 05:26:06 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:31:24.389 05:26:06 keyring_linux -- keyring/linux.sh@1 -- # cleanup 01:31:24.389 05:26:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:31:24.389 05:26:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 01:31:24.389 05:26:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 01:31:24.647 05:26:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 01:31:24.647 05:26:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:31:24.648 05:26:06 keyring_linux -- keyring/linux.sh@33 -- # sn=576585084 01:31:24.648 05:26:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 576585084 01:31:24.648 1 links removed 01:31:24.648 05:26:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:31:24.648 05:26:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 01:31:24.648 05:26:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 01:31:24.648 05:26:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 01:31:24.648 05:26:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 01:31:24.648 05:26:06 keyring_linux -- keyring/linux.sh@33 -- # sn=428204821 01:31:24.648 05:26:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 428204821 01:31:24.648 1 links removed 01:31:24.648 05:26:06 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85730 01:31:24.648 05:26:06 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85730 ']' 01:31:24.648 05:26:06 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85730 01:31:24.648 05:26:06 keyring_linux -- common/autotest_common.sh@959 -- # uname 01:31:24.648 05:26:06 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:31:24.648 05:26:06 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85730 01:31:24.648 05:26:06 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:31:24.648 killing process with pid 85730 01:31:24.648 Received shutdown signal, test time was about 1.000000 seconds 01:31:24.648 01:31:24.648 Latency(us) 01:31:24.648 [2024-12-09T05:26:07.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:31:24.648 [2024-12-09T05:26:07.104Z] =================================================================================================================== 01:31:24.648 
[2024-12-09T05:26:07.104Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:31:24.648 05:26:06 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:31:24.648 05:26:06 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85730' 01:31:24.648 05:26:06 keyring_linux -- common/autotest_common.sh@973 -- # kill 85730 01:31:24.648 05:26:06 keyring_linux -- common/autotest_common.sh@978 -- # wait 85730 01:31:24.648 05:26:07 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85712 01:31:24.648 05:26:07 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85712 ']' 01:31:24.648 05:26:07 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85712 01:31:24.648 05:26:07 keyring_linux -- common/autotest_common.sh@959 -- # uname 01:31:24.906 05:26:07 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:31:24.906 05:26:07 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85712 01:31:24.906 05:26:07 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:31:24.906 05:26:07 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:31:24.906 05:26:07 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85712' 01:31:24.906 killing process with pid 85712 01:31:24.906 05:26:07 keyring_linux -- common/autotest_common.sh@973 -- # kill 85712 01:31:24.906 05:26:07 keyring_linux -- common/autotest_common.sh@978 -- # wait 85712 01:31:25.473 01:31:25.473 real 0m6.250s 01:31:25.473 user 0m11.055s 01:31:25.473 sys 0m1.825s 01:31:25.473 05:26:07 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 01:31:25.473 ************************************ 01:31:25.473 END TEST keyring_linux 01:31:25.473 ************************************ 01:31:25.473 05:26:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:31:25.473 05:26:07 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 01:31:25.473 05:26:07 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 01:31:25.473 05:26:07 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 01:31:25.473 05:26:07 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 01:31:25.473 05:26:07 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 01:31:25.473 05:26:07 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 01:31:25.473 05:26:07 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 01:31:25.473 05:26:07 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 01:31:25.473 05:26:07 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 01:31:25.473 05:26:07 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 01:31:25.473 05:26:07 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 01:31:25.473 05:26:07 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 01:31:25.473 05:26:07 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 01:31:25.473 05:26:07 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 01:31:25.473 05:26:07 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 01:31:25.473 05:26:07 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 01:31:25.473 05:26:07 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 01:31:25.473 05:26:07 -- common/autotest_common.sh@726 -- # xtrace_disable 01:31:25.473 05:26:07 -- common/autotest_common.sh@10 -- # set +x 01:31:25.473 05:26:07 -- spdk/autotest.sh@388 -- # autotest_cleanup 01:31:25.473 05:26:07 -- common/autotest_common.sh@1396 -- # local autotest_es=0 01:31:25.473 05:26:07 -- common/autotest_common.sh@1397 -- # xtrace_disable 01:31:25.473 05:26:07 -- common/autotest_common.sh@10 -- # set +x 01:31:28.010 INFO: APP EXITING 01:31:28.010 INFO: 
killing all VMs 01:31:28.010 INFO: killing vhost app 01:31:28.010 INFO: EXIT DONE 01:31:28.575 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:31:28.575 0000:00:11.0 (1b36 0010): Already using the nvme driver 01:31:28.575 0000:00:10.0 (1b36 0010): Already using the nvme driver 01:31:29.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:31:29.511 Cleaning 01:31:29.511 Removing: /var/run/dpdk/spdk0/config 01:31:29.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 01:31:29.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 01:31:29.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 01:31:29.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 01:31:29.511 Removing: /var/run/dpdk/spdk0/fbarray_memzone 01:31:29.511 Removing: /var/run/dpdk/spdk0/hugepage_info 01:31:29.511 Removing: /var/run/dpdk/spdk1/config 01:31:29.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 01:31:29.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 01:31:29.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 01:31:29.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 01:31:29.511 Removing: /var/run/dpdk/spdk1/fbarray_memzone 01:31:29.511 Removing: /var/run/dpdk/spdk1/hugepage_info 01:31:29.511 Removing: /var/run/dpdk/spdk2/config 01:31:29.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 01:31:29.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 01:31:29.512 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 01:31:29.512 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 01:31:29.512 Removing: /var/run/dpdk/spdk2/fbarray_memzone 01:31:29.512 Removing: /var/run/dpdk/spdk2/hugepage_info 01:31:29.512 Removing: /var/run/dpdk/spdk3/config 01:31:29.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 01:31:29.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 01:31:29.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 01:31:29.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 01:31:29.512 Removing: /var/run/dpdk/spdk3/fbarray_memzone 01:31:29.512 Removing: /var/run/dpdk/spdk3/hugepage_info 01:31:29.512 Removing: /var/run/dpdk/spdk4/config 01:31:29.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 01:31:29.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 01:31:29.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 01:31:29.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 01:31:29.512 Removing: /var/run/dpdk/spdk4/fbarray_memzone 01:31:29.772 Removing: /var/run/dpdk/spdk4/hugepage_info 01:31:29.772 Removing: /dev/shm/nvmf_trace.0 01:31:29.772 Removing: /dev/shm/spdk_tgt_trace.pid56878 01:31:29.772 Removing: /var/run/dpdk/spdk0 01:31:29.772 Removing: /var/run/dpdk/spdk1 01:31:29.772 Removing: /var/run/dpdk/spdk2 01:31:29.772 Removing: /var/run/dpdk/spdk3 01:31:29.772 Removing: /var/run/dpdk/spdk4 01:31:29.772 Removing: /var/run/dpdk/spdk_pid56725 01:31:29.772 Removing: /var/run/dpdk/spdk_pid56878 01:31:29.772 Removing: /var/run/dpdk/spdk_pid57083 01:31:29.772 Removing: /var/run/dpdk/spdk_pid57165 01:31:29.772 Removing: /var/run/dpdk/spdk_pid57193 01:31:29.772 Removing: /var/run/dpdk/spdk_pid57298 01:31:29.772 Removing: /var/run/dpdk/spdk_pid57315 01:31:29.772 Removing: /var/run/dpdk/spdk_pid57460 01:31:29.772 Removing: /var/run/dpdk/spdk_pid57645 01:31:29.772 Removing: /var/run/dpdk/spdk_pid57794 01:31:29.772 Removing: 
/var/run/dpdk/spdk_pid57872 01:31:29.772 Removing: /var/run/dpdk/spdk_pid57945 01:31:29.772 Removing: /var/run/dpdk/spdk_pid58044 01:31:29.772 Removing: /var/run/dpdk/spdk_pid58129 01:31:29.772 Removing: /var/run/dpdk/spdk_pid58162 01:31:29.772 Removing: /var/run/dpdk/spdk_pid58198 01:31:29.772 Removing: /var/run/dpdk/spdk_pid58267 01:31:29.772 Removing: /var/run/dpdk/spdk_pid58356 01:31:29.772 Removing: /var/run/dpdk/spdk_pid58780 01:31:29.772 Removing: /var/run/dpdk/spdk_pid58832 01:31:29.772 Removing: /var/run/dpdk/spdk_pid58883 01:31:29.772 Removing: /var/run/dpdk/spdk_pid58899 01:31:29.772 Removing: /var/run/dpdk/spdk_pid58967 01:31:29.772 Removing: /var/run/dpdk/spdk_pid58983 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59043 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59055 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59106 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59124 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59164 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59182 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59318 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59354 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59436 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59776 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59793 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59824 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59838 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59853 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59872 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59886 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59901 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59920 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59938 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59955 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59974 01:31:29.772 Removing: /var/run/dpdk/spdk_pid59987 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60003 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60022 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60035 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60051 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60070 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60083 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60099 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60135 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60143 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60178 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60250 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60275 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60290 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60317 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60328 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60331 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60378 01:31:29.772 Removing: /var/run/dpdk/spdk_pid60390 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60420 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60435 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60440 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60455 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60459 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60473 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60478 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60492 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60516 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60548 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60552 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60586 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60590 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60603 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60640 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60657 
01:31:30.030 Removing: /var/run/dpdk/spdk_pid60686 01:31:30.030 Removing: /var/run/dpdk/spdk_pid60693 01:31:30.031 Removing: /var/run/dpdk/spdk_pid60701 01:31:30.031 Removing: /var/run/dpdk/spdk_pid60708 01:31:30.031 Removing: /var/run/dpdk/spdk_pid60716 01:31:30.031 Removing: /var/run/dpdk/spdk_pid60723 01:31:30.031 Removing: /var/run/dpdk/spdk_pid60731 01:31:30.031 Removing: /var/run/dpdk/spdk_pid60738 01:31:30.031 Removing: /var/run/dpdk/spdk_pid60820 01:31:30.031 Removing: /var/run/dpdk/spdk_pid60862 01:31:30.031 Removing: /var/run/dpdk/spdk_pid60975 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61005 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61049 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61063 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61080 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61100 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61136 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61147 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61225 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61246 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61285 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61358 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61404 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61428 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61529 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61577 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61605 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61836 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61934 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61962 01:31:30.031 Removing: /var/run/dpdk/spdk_pid61992 01:31:30.031 Removing: /var/run/dpdk/spdk_pid62025 01:31:30.031 Removing: /var/run/dpdk/spdk_pid62063 01:31:30.031 Removing: /var/run/dpdk/spdk_pid62098 01:31:30.031 Removing: /var/run/dpdk/spdk_pid62129 01:31:30.031 Removing: /var/run/dpdk/spdk_pid62537 01:31:30.031 Removing: /var/run/dpdk/spdk_pid62575 01:31:30.031 Removing: /var/run/dpdk/spdk_pid62914 01:31:30.031 Removing: /var/run/dpdk/spdk_pid63373 01:31:30.031 Removing: /var/run/dpdk/spdk_pid63635 01:31:30.031 Removing: /var/run/dpdk/spdk_pid64527 01:31:30.031 Removing: /var/run/dpdk/spdk_pid65451 01:31:30.031 Removing: /var/run/dpdk/spdk_pid65574 01:31:30.031 Removing: /var/run/dpdk/spdk_pid65636 01:31:30.031 Removing: /var/run/dpdk/spdk_pid67057 01:31:30.031 Removing: /var/run/dpdk/spdk_pid67382 01:31:30.031 Removing: /var/run/dpdk/spdk_pid70817 01:31:30.031 Removing: /var/run/dpdk/spdk_pid71165 01:31:30.031 Removing: /var/run/dpdk/spdk_pid71278 01:31:30.031 Removing: /var/run/dpdk/spdk_pid71414 01:31:30.031 Removing: /var/run/dpdk/spdk_pid71441 01:31:30.031 Removing: /var/run/dpdk/spdk_pid71471 01:31:30.031 Removing: /var/run/dpdk/spdk_pid71492 01:31:30.031 Removing: /var/run/dpdk/spdk_pid71586 01:31:30.031 Removing: /var/run/dpdk/spdk_pid71716 01:31:30.031 Removing: /var/run/dpdk/spdk_pid71870 01:31:30.031 Removing: /var/run/dpdk/spdk_pid71950 01:31:30.031 Removing: /var/run/dpdk/spdk_pid72139 01:31:30.031 Removing: /var/run/dpdk/spdk_pid72222 01:31:30.031 Removing: /var/run/dpdk/spdk_pid72310 01:31:30.290 Removing: /var/run/dpdk/spdk_pid72670 01:31:30.290 Removing: /var/run/dpdk/spdk_pid73092 01:31:30.290 Removing: /var/run/dpdk/spdk_pid73093 01:31:30.290 Removing: /var/run/dpdk/spdk_pid73094 01:31:30.290 Removing: /var/run/dpdk/spdk_pid73367 01:31:30.290 Removing: /var/run/dpdk/spdk_pid73642 01:31:30.290 Removing: /var/run/dpdk/spdk_pid74031 01:31:30.290 Removing: /var/run/dpdk/spdk_pid74043 01:31:30.290 Removing: /var/run/dpdk/spdk_pid74363 01:31:30.290 Removing: 
/var/run/dpdk/spdk_pid74383 01:31:30.290 Removing: /var/run/dpdk/spdk_pid74397 01:31:30.290 Removing: /var/run/dpdk/spdk_pid74433 01:31:30.290 Removing: /var/run/dpdk/spdk_pid74438 01:31:30.290 Removing: /var/run/dpdk/spdk_pid74802 01:31:30.290 Removing: /var/run/dpdk/spdk_pid74846 01:31:30.290 Removing: /var/run/dpdk/spdk_pid75179 01:31:30.290 Removing: /var/run/dpdk/spdk_pid75374 01:31:30.290 Removing: /var/run/dpdk/spdk_pid75810 01:31:30.290 Removing: /var/run/dpdk/spdk_pid76368 01:31:30.290 Removing: /var/run/dpdk/spdk_pid77191 01:31:30.290 Removing: /var/run/dpdk/spdk_pid77843 01:31:30.290 Removing: /var/run/dpdk/spdk_pid77850 01:31:30.290 Removing: /var/run/dpdk/spdk_pid79887 01:31:30.290 Removing: /var/run/dpdk/spdk_pid79953 01:31:30.290 Removing: /var/run/dpdk/spdk_pid80008 01:31:30.290 Removing: /var/run/dpdk/spdk_pid80069 01:31:30.290 Removing: /var/run/dpdk/spdk_pid80184 01:31:30.290 Removing: /var/run/dpdk/spdk_pid80244 01:31:30.290 Removing: /var/run/dpdk/spdk_pid80299 01:31:30.290 Removing: /var/run/dpdk/spdk_pid80359 01:31:30.290 Removing: /var/run/dpdk/spdk_pid80732 01:31:30.290 Removing: /var/run/dpdk/spdk_pid81947 01:31:30.290 Removing: /var/run/dpdk/spdk_pid82092 01:31:30.290 Removing: /var/run/dpdk/spdk_pid82334 01:31:30.290 Removing: /var/run/dpdk/spdk_pid82944 01:31:30.290 Removing: /var/run/dpdk/spdk_pid83109 01:31:30.290 Removing: /var/run/dpdk/spdk_pid83275 01:31:30.290 Removing: /var/run/dpdk/spdk_pid83372 01:31:30.290 Removing: /var/run/dpdk/spdk_pid83535 01:31:30.290 Removing: /var/run/dpdk/spdk_pid83649 01:31:30.290 Removing: /var/run/dpdk/spdk_pid84369 01:31:30.290 Removing: /var/run/dpdk/spdk_pid84404 01:31:30.290 Removing: /var/run/dpdk/spdk_pid84439 01:31:30.291 Removing: /var/run/dpdk/spdk_pid84700 01:31:30.291 Removing: /var/run/dpdk/spdk_pid84735 01:31:30.291 Removing: /var/run/dpdk/spdk_pid84766 01:31:30.291 Removing: /var/run/dpdk/spdk_pid85327 01:31:30.291 Removing: /var/run/dpdk/spdk_pid85344 01:31:30.291 Removing: /var/run/dpdk/spdk_pid85584 01:31:30.291 Removing: /var/run/dpdk/spdk_pid85712 01:31:30.291 Removing: /var/run/dpdk/spdk_pid85730 01:31:30.291 Clean 01:31:30.291 05:26:12 -- common/autotest_common.sh@1453 -- # return 0 01:31:30.291 05:26:12 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 01:31:30.291 05:26:12 -- common/autotest_common.sh@732 -- # xtrace_disable 01:31:30.291 05:26:12 -- common/autotest_common.sh@10 -- # set +x 01:31:30.550 05:26:12 -- spdk/autotest.sh@391 -- # timing_exit autotest 01:31:30.550 05:26:12 -- common/autotest_common.sh@732 -- # xtrace_disable 01:31:30.550 05:26:12 -- common/autotest_common.sh@10 -- # set +x 01:31:30.550 05:26:12 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 01:31:30.550 05:26:12 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 01:31:30.550 05:26:12 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 01:31:30.550 05:26:12 -- spdk/autotest.sh@396 -- # [[ y == y ]] 01:31:30.550 05:26:12 -- spdk/autotest.sh@398 -- # hostname 01:31:30.550 05:26:12 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 01:31:30.809 geninfo: WARNING: invalid characters removed from 
testname! 01:31:57.387 05:26:36 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:31:57.387 05:26:39 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:31:59.294 05:26:41 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:32:01.832 05:26:44 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:32:04.382 05:26:46 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:32:06.287 05:26:48 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:32:08.823 05:26:50 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 01:32:08.823 05:26:50 -- spdk/autorun.sh@1 -- $ timing_finish 01:32:08.823 05:26:50 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 01:32:08.823 05:26:50 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 01:32:08.823 05:26:50 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 01:32:08.823 05:26:50 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 01:32:08.823 + [[ -n 5423 ]] 01:32:08.823 + sudo kill 5423 01:32:08.832 [Pipeline] } 01:32:08.846 [Pipeline] // timeout 01:32:08.852 [Pipeline] } 01:32:08.866 [Pipeline] // stage 01:32:08.870 [Pipeline] } 01:32:08.882 [Pipeline] // catchError 01:32:08.889 [Pipeline] stage 01:32:08.890 [Pipeline] { (Stop VM) 01:32:08.900 [Pipeline] sh 01:32:09.181 + vagrant halt 01:32:11.717 ==> default: Halting domain... 
01:32:19.851 [Pipeline] sh 01:32:20.134 + vagrant destroy -f 01:32:22.681 ==> default: Removing domain... 01:32:22.953 [Pipeline] sh 01:32:23.235 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 01:32:23.244 [Pipeline] } 01:32:23.257 [Pipeline] // stage 01:32:23.262 [Pipeline] } 01:32:23.275 [Pipeline] // dir 01:32:23.281 [Pipeline] } 01:32:23.294 [Pipeline] // wrap 01:32:23.297 [Pipeline] } 01:32:23.307 [Pipeline] // catchError 01:32:23.314 [Pipeline] stage 01:32:23.316 [Pipeline] { (Epilogue) 01:32:23.325 [Pipeline] sh 01:32:23.610 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 01:32:28.895 [Pipeline] catchError 01:32:28.896 [Pipeline] { 01:32:28.908 [Pipeline] sh 01:32:29.188 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 01:32:29.188 Artifacts sizes are good 01:32:29.197 [Pipeline] } 01:32:29.210 [Pipeline] // catchError 01:32:29.219 [Pipeline] archiveArtifacts 01:32:29.226 Archiving artifacts 01:32:29.397 [Pipeline] cleanWs 01:32:29.408 [WS-CLEANUP] Deleting project workspace... 01:32:29.408 [WS-CLEANUP] Deferred wipeout is used... 01:32:29.414 [WS-CLEANUP] done 01:32:29.416 [Pipeline] } 01:32:29.429 [Pipeline] // stage 01:32:29.434 [Pipeline] } 01:32:29.446 [Pipeline] // node 01:32:29.451 [Pipeline] End of Pipeline 01:32:29.484 Finished: SUCCESS